Nov 24 11:16:27 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 11:16:27 crc restorecon[4677]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:27 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 11:16:28 crc restorecon[4677]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc 
restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc 
restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 
11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 
11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc 
restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc 
restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc 
restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 11:16:28 crc restorecon[4677]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 
crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc 
restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 11:16:28 crc restorecon[4677]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc 
restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 11:16:28 crc restorecon[4677]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 11:16:29 crc kubenswrapper[4678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:16:29 crc kubenswrapper[4678]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 11:16:29 crc kubenswrapper[4678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:16:29 crc kubenswrapper[4678]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 11:16:29 crc kubenswrapper[4678]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 11:16:29 crc kubenswrapper[4678]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.631283 4678 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639424 4678 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639454 4678 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639460 4678 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639467 4678 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639474 4678 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639480 4678 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639486 4678 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639493 4678 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639499 4678 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639506 4678 
feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639513 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639520 4678 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639527 4678 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639533 4678 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639540 4678 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639547 4678 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639552 4678 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639568 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639573 4678 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639579 4678 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639586 4678 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639593 4678 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639598 4678 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639604 4678 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639610 4678 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639615 4678 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639621 4678 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639628 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639634 4678 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639640 4678 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639645 4678 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639651 4678 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639656 4678 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639661 4678 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639687 4678 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639693 4678 feature_gate.go:330] unrecognized 
feature gate: CSIDriverSharedResource Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639698 4678 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639704 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639710 4678 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639715 4678 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639722 4678 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639729 4678 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639743 4678 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639749 4678 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639755 4678 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639761 4678 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639766 4678 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639772 4678 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639777 4678 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639782 4678 feature_gate.go:330] 
unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639788 4678 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639793 4678 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639799 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639804 4678 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639809 4678 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639815 4678 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639820 4678 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639825 4678 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639830 4678 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639837 4678 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639844 4678 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639850 4678 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639858 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639863 4678 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639869 4678 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639874 4678 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639880 4678 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639885 4678 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639890 4678 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639895 4678 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.639900 4678 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640014 4678 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640028 4678 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640037 4678 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640046 4678 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 11:16:29 crc 
kubenswrapper[4678]: I1124 11:16:29.640053 4678 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640059 4678 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640068 4678 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640075 4678 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640082 4678 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640089 4678 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640096 4678 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640103 4678 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640109 4678 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640115 4678 flags.go:64] FLAG: --cgroup-root="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640121 4678 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640127 4678 flags.go:64] FLAG: --client-ca-file="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640133 4678 flags.go:64] FLAG: --cloud-config="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640139 4678 flags.go:64] FLAG: --cloud-provider="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640146 4678 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640154 4678 flags.go:64] FLAG: --cluster-domain="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640160 4678 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640166 4678 flags.go:64] FLAG: --config-dir="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640172 4678 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640179 4678 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640187 4678 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640193 4678 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640199 4678 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640206 4678 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640212 4678 flags.go:64] FLAG: --contention-profiling="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640218 4678 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640224 4678 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640230 4678 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640237 4678 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640244 4678 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640251 4678 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640257 4678 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640263 4678 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 
11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640269 4678 flags.go:64] FLAG: --enable-server="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640275 4678 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640282 4678 flags.go:64] FLAG: --event-burst="100" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640290 4678 flags.go:64] FLAG: --event-qps="50" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640296 4678 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640302 4678 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640309 4678 flags.go:64] FLAG: --eviction-hard="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640317 4678 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640324 4678 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640330 4678 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640336 4678 flags.go:64] FLAG: --eviction-soft="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640343 4678 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640348 4678 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640354 4678 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640360 4678 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640366 4678 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640372 4678 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 
11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640379 4678 flags.go:64] FLAG: --feature-gates="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640386 4678 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640392 4678 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640398 4678 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640405 4678 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640411 4678 flags.go:64] FLAG: --healthz-port="10248" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640417 4678 flags.go:64] FLAG: --help="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640424 4678 flags.go:64] FLAG: --hostname-override="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640430 4678 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640436 4678 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640442 4678 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640448 4678 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640454 4678 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640460 4678 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640466 4678 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640472 4678 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640478 4678 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 11:16:29 crc 
kubenswrapper[4678]: I1124 11:16:29.640484 4678 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640490 4678 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640497 4678 flags.go:64] FLAG: --kube-reserved="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640503 4678 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640509 4678 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640515 4678 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640521 4678 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640527 4678 flags.go:64] FLAG: --lock-file="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640533 4678 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640539 4678 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640545 4678 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640554 4678 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640560 4678 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640566 4678 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640573 4678 flags.go:64] FLAG: --logging-format="text" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640580 4678 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640587 4678 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640593 4678 flags.go:64] FLAG: --manifest-url="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640600 4678 flags.go:64] FLAG: --manifest-url-header="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640609 4678 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640617 4678 flags.go:64] FLAG: --max-open-files="1000000" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640624 4678 flags.go:64] FLAG: --max-pods="110" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640630 4678 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640637 4678 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640643 4678 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640649 4678 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640655 4678 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640661 4678 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640685 4678 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640700 4678 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640706 4678 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640712 4678 flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640719 4678 flags.go:64] FLAG: --pod-cidr="" Nov 24 11:16:29 crc 
kubenswrapper[4678]: I1124 11:16:29.640725 4678 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640734 4678 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640740 4678 flags.go:64] FLAG: --pod-max-pids="-1" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640746 4678 flags.go:64] FLAG: --pods-per-core="0" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640752 4678 flags.go:64] FLAG: --port="10250" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640758 4678 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640765 4678 flags.go:64] FLAG: --provider-id="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640771 4678 flags.go:64] FLAG: --qos-reserved="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640776 4678 flags.go:64] FLAG: --read-only-port="10255" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640782 4678 flags.go:64] FLAG: --register-node="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640788 4678 flags.go:64] FLAG: --register-schedulable="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640794 4678 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640805 4678 flags.go:64] FLAG: --registry-burst="10" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640815 4678 flags.go:64] FLAG: --registry-qps="5" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640821 4678 flags.go:64] FLAG: --reserved-cpus="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640827 4678 flags.go:64] FLAG: --reserved-memory="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640835 4678 flags.go:64] FLAG: 
--resolv-conf="/etc/resolv.conf" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640840 4678 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640846 4678 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640852 4678 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640858 4678 flags.go:64] FLAG: --runonce="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640864 4678 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640871 4678 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640877 4678 flags.go:64] FLAG: --seccomp-default="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640884 4678 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640890 4678 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640896 4678 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640903 4678 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640909 4678 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640915 4678 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640921 4678 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640927 4678 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640933 4678 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: 
I1124 11:16:29.640940 4678 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640946 4678 flags.go:64] FLAG: --system-cgroups="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640952 4678 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640961 4678 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640967 4678 flags.go:64] FLAG: --tls-cert-file="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640973 4678 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640980 4678 flags.go:64] FLAG: --tls-min-version="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640986 4678 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640992 4678 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.640998 4678 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.641004 4678 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.641010 4678 flags.go:64] FLAG: --v="2" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.641020 4678 flags.go:64] FLAG: --version="false" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.641028 4678 flags.go:64] FLAG: --vmodule="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.641036 4678 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.641042 4678 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641175 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:16:29 crc 
kubenswrapper[4678]: W1124 11:16:29.641182 4678 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641188 4678 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641194 4678 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641200 4678 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641205 4678 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641210 4678 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641215 4678 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641221 4678 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641227 4678 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641232 4678 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641239 4678 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641246 4678 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641252 4678 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641258 4678 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641263 4678 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641270 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641275 4678 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641281 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641286 4678 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641293 4678 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641300 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641306 4678 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641311 4678 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641317 4678 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641323 4678 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641328 4678 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641333 4678 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641341 4678 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641346 4678 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641352 4678 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641357 4678 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641362 4678 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641367 4678 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641372 4678 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641378 4678 feature_gate.go:330] unrecognized 
feature gate: OVNObservability Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641383 4678 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641388 4678 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641393 4678 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641398 4678 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641404 4678 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641409 4678 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641414 4678 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641419 4678 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641424 4678 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641430 4678 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641437 4678 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641443 4678 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641449 4678 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641454 4678 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641460 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641465 4678 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641491 4678 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641499 4678 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641506 4678 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641512 4678 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641518 4678 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641524 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641529 4678 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641535 4678 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641543 4678 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641549 4678 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641556 4678 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641561 4678 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641567 4678 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641572 4678 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641578 4678 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641583 4678 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641588 4678 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641594 4678 feature_gate.go:330] 
unrecognized feature gate: OnClusterBuild Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.641599 4678 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.643385 4678 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.659886 4678 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.659947 4678 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660114 4678 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660136 4678 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660149 4678 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660162 4678 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660174 4678 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660185 4678 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660193 4678 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660201 4678 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660210 4678 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660218 4678 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660226 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660234 4678 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660245 4678 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660257 4678 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660268 4678 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660280 4678 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660289 4678 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660299 4678 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660307 4678 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660315 4678 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660323 4678 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660331 4678 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660339 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660346 4678 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660354 4678 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660362 4678 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660370 4678 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660377 4678 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660387 4678 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660396 4678 
feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660404 4678 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660411 4678 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660419 4678 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660427 4678 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660435 4678 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660443 4678 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660450 4678 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660458 4678 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660466 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660473 4678 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660482 4678 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660490 4678 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660497 4678 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660505 4678 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:16:29 
crc kubenswrapper[4678]: W1124 11:16:29.660513 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660521 4678 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660528 4678 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660536 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660545 4678 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660554 4678 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660565 4678 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660574 4678 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660583 4678 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660592 4678 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660601 4678 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660610 4678 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660619 4678 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660627 4678 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660635 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660643 4678 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660650 4678 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660660 4678 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660691 4678 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660699 4678 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660707 4678 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660715 4678 feature_gate.go:330] unrecognized feature 
gate: AzureWorkloadIdentity Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660723 4678 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660731 4678 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660739 4678 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660747 4678 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.660754 4678 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.660768 4678 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661004 4678 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661024 4678 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661037 4678 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661049 4678 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661061 4678 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661073 4678 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661083 4678 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661094 4678 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661104 4678 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661114 4678 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661123 4678 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661131 4678 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661139 4678 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661146 4678 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661155 4678 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661162 4678 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661170 4678 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661178 4678 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661186 4678 feature_gate.go:330] unrecognized feature gate: Example Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661195 4678 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661203 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661211 4678 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661219 4678 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661227 4678 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661237 4678 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661247 4678 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661258 4678 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661267 4678 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661276 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661284 4678 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661292 4678 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661300 4678 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661308 4678 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661316 4678 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661324 4678 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661332 4678 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661340 4678 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661349 4678 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661357 4678 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661365 4678 feature_gate.go:330] 
unrecognized feature gate: MachineAPIMigration Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661376 4678 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661386 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661396 4678 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661406 4678 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661415 4678 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661426 4678 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661435 4678 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661444 4678 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661453 4678 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661461 4678 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661469 4678 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661477 4678 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661487 4678 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661496 4678 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661504 4678 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661512 4678 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661521 4678 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661528 4678 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661536 4678 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661544 4678 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661552 4678 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661559 4678 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661567 4678 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661579 4678 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661589 4678 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661599 4678 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661608 4678 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661618 4678 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661627 4678 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661635 4678 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.661644 4678 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.661657 4678 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.663027 4678 server.go:940] "Client rotation is on, will bootstrap in background" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.669189 4678 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.669324 4678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.671285 4678 server.go:997] "Starting client certificate rotation" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.671320 4678 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.672496 4678 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-21 08:23:05.66577611 +0000 UTC Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.672721 4678 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.703533 4678 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.705627 4678 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.707901 4678 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.726923 4678 log.go:25] "Validated CRI v1 runtime API" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.759495 4678 log.go:25] "Validated CRI v1 image API" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.762647 4678 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.769046 4678 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-11-12-02-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.769093 4678 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.789117 4678 manager.go:217] Machine: {Timestamp:2025-11-24 11:16:29.786255984 +0000 UTC m=+0.717315643 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:bab8289d-1a3e-4a7d-817f-6b8fdc970a7c BootID:37fc4262-6086-4dd5-aa35-53966bd309d2 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:59:d1:1c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:59:d1:1c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c4:e8:42 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:37:0b:ee Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c7:fd:3f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a4:77:61 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:db:a8:5c:63:d9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ca:44:8d:8e:bb:64 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.789390 4678 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.789637 4678 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.791134 4678 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.791583 4678 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.791698 4678 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.792098 4678 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.792117 4678 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.792656 4678 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.792742 4678 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.795311 4678 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.796228 4678 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.808738 4678 kubelet.go:418] "Attempting to sync node with API server" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.808817 4678 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.808931 4678 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.808969 4678 kubelet.go:324] "Adding apiserver pod source" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.808998 4678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.816027 4678 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.817114 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.817390 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.817235 4678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.818365 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.818491 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.820181 4678 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.821916 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822060 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822141 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822218 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822300 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822380 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822450 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822535 4678 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822617 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822757 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822868 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.822956 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.824823 4678 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.825584 4678 server.go:1280] "Started kubelet" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.826084 4678 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.826089 4678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.826185 4678 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.826704 4678 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 11:16:29 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.829749 4678 server.go:460] "Adding debug handlers to kubelet server" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.830326 4678 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.830573 4678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.830554 4678 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:20:45.233023314 +0000 UTC Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.830740 4678 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 669h4m15.402302469s for next certificate rotation Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.830830 4678 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.830853 4678 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.831038 4678 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.831102 4678 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.839472 4678 factory.go:55] Registering systemd factory Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.839526 4678 factory.go:221] Registration of the systemd container factory successfully Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.839821 4678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" 
interval="200ms" Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.839960 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.840104 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.840212 4678 factory.go:153] Registering CRI-O factory Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.840248 4678 factory.go:221] Registration of the crio container factory successfully Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.840383 4678 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.840443 4678 factory.go:103] Registering Raw factory Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.840472 4678 manager.go:1196] Started watching for new ooms in manager Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.841929 4678 manager.go:319] Starting recovery of all containers Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.841990 4678 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aed2eea6220d2 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 11:16:29.825548498 +0000 UTC m=+0.756608157,LastTimestamp:2025-11-24 11:16:29.825548498 +0000 UTC m=+0.756608157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847816 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847865 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847877 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847888 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847898 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847907 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847917 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847974 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847985 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.847994 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848007 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848035 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848043 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848054 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848065 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848074 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848084 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" 
seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848095 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848106 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848116 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848146 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848159 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848170 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848184 4678 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848195 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848207 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848220 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848253 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848267 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848278 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848291 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.848302 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849874 4678 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849903 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849917 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849931 4678 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849945 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849957 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849968 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849980 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.849993 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850006 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850020 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850032 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850045 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850060 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850072 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850083 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850094 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850106 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850116 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850127 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850141 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850157 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850169 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850179 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850191 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850202 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850213 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850225 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850240 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850254 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850281 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850298 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850309 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850334 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850350 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850366 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850378 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850390 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850402 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850461 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850474 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850485 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850496 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850506 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850517 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850528 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850539 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850550 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850559 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850570 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850581 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850592 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850602 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850612 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850622 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850633 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850646 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850656 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850686 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850695 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850707 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850720 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850733 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850744 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850756 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850767 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850778 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850789 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850801 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850813 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850824 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850835 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850846 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850863 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850874 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850886 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850898 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850909 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850920 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850931 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850943 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850953 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850963 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850974 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850984 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.850995 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851006 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851016 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851028 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851038 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851049 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851059 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851069 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851082 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851092 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851102 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851112 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851122 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851132 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851141 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851151 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851162 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851171 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851181 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851195 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851208 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851219 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851229 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851240 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851253 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851283 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851296 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851309 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851319 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851331 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851342 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851354 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851365 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851375 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851386 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851401 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851413 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851434 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851446 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851459 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851472 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851484 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851494 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851505 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851519 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851531 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851543 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851558 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851569 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851579 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851590 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851603 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851614 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851624 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851635 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851648 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851659 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851685 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851699 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851710 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851721 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851732 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851745 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851755 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851767 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851778 4678 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851789 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851801 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851811 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851823 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851835 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851847 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851858 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851870 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851881 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851892 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851908 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851919 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851929 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851939 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851952 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851962 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851973 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851986 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.851997 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.852008 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.852020 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.852031 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.852042 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.852054 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.852064 4678 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.852073 4678 reconstruct.go:97] "Volume reconstruction finished" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.852081 4678 reconciler.go:26] "Reconciler: start to sync state" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.867827 4678 manager.go:324] Recovery completed Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.877949 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.880211 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.880249 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.880259 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.882158 4678 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.882175 4678 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.882195 4678 state_mem.go:36] "Initialized new in-memory state store" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.890921 4678 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.894244 4678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.894288 4678 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.894320 4678 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.894366 4678 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 11:16:29 crc kubenswrapper[4678]: W1124 11:16:29.896338 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.896546 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.900965 4678 policy_none.go:49] "None policy: Start" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.901766 4678 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.901789 4678 state_mem.go:35] "Initializing new in-memory state store" Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.931763 4678 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.971689 4678 manager.go:334] 
"Starting Device Plugin manager" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.974318 4678 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.974353 4678 server.go:79] "Starting device plugin registration server" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.974830 4678 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.974848 4678 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.975860 4678 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.976145 4678 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.976174 4678 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 11:16:29 crc kubenswrapper[4678]: E1124 11:16:29.983839 4678 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.994542 4678 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.994692 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.995873 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.995910 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.995919 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.996080 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.996402 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.996469 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.996987 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997028 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997039 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997165 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997402 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997456 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997529 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997545 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997553 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997826 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997854 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.997865 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.998091 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.998576 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.998607 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.998619 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.998642 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.998654 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.999069 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.999093 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.999103 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.999235 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.999578 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.999602 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.999614 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:29 crc 
kubenswrapper[4678]: I1124 11:16:29.999781 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 11:16:29 crc kubenswrapper[4678]: I1124 11:16:29.999816 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.000282 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.000318 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.000330 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.000550 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.000585 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.000760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.000782 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.000793 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.002647 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.002832 4678 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.002939 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.041580 4678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="400ms" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.054561 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.054852 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.054909 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.054938 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.054958 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.054982 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.055009 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.055036 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.055173 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 11:16:30 crc 
kubenswrapper[4678]: I1124 11:16:30.055266 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.055425 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.055480 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.055512 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.055539 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.055563 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.075877 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.077410 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.077459 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.077472 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.077502 4678 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.078207 4678 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157197 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157315 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157359 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157398 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157436 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157447 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157493 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157470 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157470 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157563 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157553 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157601 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157630 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157564 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157659 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157752 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157769 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157808 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157843 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157876 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157907 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157945 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157961 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157979 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.157988 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.158007 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.158024 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.158028 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.158004 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.158112 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.279393 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.281636 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.281742 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.281763 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.281805 4678 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.282608 4678 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.332316 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.357631 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.365339 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.370723 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.381043 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-51c75a4865f2840819a483592ff11abca15cfaa9283694bd743b5c0449f8e1c9 WatchSource:0}: Error finding container 51c75a4865f2840819a483592ff11abca15cfaa9283694bd743b5c0449f8e1c9: Status 404 returned error can't find the container with id 51c75a4865f2840819a483592ff11abca15cfaa9283694bd743b5c0449f8e1c9
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.390062 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.410332 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-09e79b6576d63642ec7779c7c112d23ab7bc2a719f7b2ec63582d3457ee654ba WatchSource:0}: Error finding container 09e79b6576d63642ec7779c7c112d23ab7bc2a719f7b2ec63582d3457ee654ba: Status 404 returned error can't find the container with id 09e79b6576d63642ec7779c7c112d23ab7bc2a719f7b2ec63582d3457ee654ba
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.414782 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6e6af9c614d1b84f597c5327ffeee86af3f456c0be311389c9c8c5020e70ea47 WatchSource:0}: Error finding container 6e6af9c614d1b84f597c5327ffeee86af3f456c0be311389c9c8c5020e70ea47: Status 404 returned error can't find the container with id 6e6af9c614d1b84f597c5327ffeee86af3f456c0be311389c9c8c5020e70ea47
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.417327 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-abf4098c2034e5a57d45bb32ff677af44a0c2ea016c1f7b78c9dc3e4f2415f37 WatchSource:0}: Error finding container abf4098c2034e5a57d45bb32ff677af44a0c2ea016c1f7b78c9dc3e4f2415f37: Status 404 returned error can't find the container with id abf4098c2034e5a57d45bb32ff677af44a0c2ea016c1f7b78c9dc3e4f2415f37
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.421737 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-686647d0f10bc3f805b963900cbf81ccf6de007bfc32b19f5b8692abf32e865e WatchSource:0}: Error finding container 686647d0f10bc3f805b963900cbf81ccf6de007bfc32b19f5b8692abf32e865e: Status 404 returned error can't find the container with id 686647d0f10bc3f805b963900cbf81ccf6de007bfc32b19f5b8692abf32e865e
Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.443022 4678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="800ms"
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.631242 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.631366 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.677864 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.678010 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.683586 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.687209 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.687258 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.687272 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.687309 4678 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.688004 4678 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc"
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.827602 4678 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.901345 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"51c75a4865f2840819a483592ff11abca15cfaa9283694bd743b5c0449f8e1c9"}
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.903362 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"686647d0f10bc3f805b963900cbf81ccf6de007bfc32b19f5b8692abf32e865e"}
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.905010 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"abf4098c2034e5a57d45bb32ff677af44a0c2ea016c1f7b78c9dc3e4f2415f37"}
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.906152 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6e6af9c614d1b84f597c5327ffeee86af3f456c0be311389c9c8c5020e70ea47"}
Nov 24 11:16:30 crc kubenswrapper[4678]: I1124 11:16:30.907579 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"09e79b6576d63642ec7779c7c112d23ab7bc2a719f7b2ec63582d3457ee654ba"}
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.908257 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.908343 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 24 11:16:30 crc kubenswrapper[4678]: W1124 11:16:30.911313 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 24 11:16:30 crc kubenswrapper[4678]: E1124 11:16:30.911367 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 24 11:16:31 crc kubenswrapper[4678]: E1124 11:16:31.244555 4678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="1.6s"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.488409 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.490099 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.490147 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.490158 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.490190 4678 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 11:16:31 crc kubenswrapper[4678]: E1124 11:16:31.490786 4678 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.827798 4678 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.899758 4678 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 24 11:16:31 crc kubenswrapper[4678]: E1124 11:16:31.901707 4678 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.912969 4678 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9c49cae4300d033a193064ef4f0b98aa8468fff60d6b21067a0e9cd48965fc03" exitCode=0
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.913075 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9c49cae4300d033a193064ef4f0b98aa8468fff60d6b21067a0e9cd48965fc03"}
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.913206 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.915038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.915093 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.915109 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.916830 4678 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2" exitCode=0
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.916927 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2"}
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.917019 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.918256 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.918296 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.918311 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.920759 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962"}
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.920792 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5"}
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.920804 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c"}
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.920814 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828"}
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.920864 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.922018 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.922045 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.922056 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.922860 4678 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4" exitCode=0
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.922939 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.922942 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4"}
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.923844 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.923878 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.923891 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.925068 4678 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d4d856633caf65f681108821ea5c34705b1588bd7d839ab8c0630db4efe00241" exitCode=0
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.925134 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d4d856633caf65f681108821ea5c34705b1588bd7d839ab8c0630db4efe00241"}
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.925323 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.926931 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.926963 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.926977 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.927014 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.928622 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.929713 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:31 crc kubenswrapper[4678]: I1124 11:16:31.929747 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.827925 4678 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 24 11:16:32 crc kubenswrapper[4678]: E1124 11:16:32.845965 4678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="3.2s"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.938311 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.938394 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.938413 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.938426 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.941789 4678 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="356b97141c23284d5aef42027f840aa50a4e31cb47f2b4ef88011c8c474e8c2a" exitCode=0
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.941880 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"356b97141c23284d5aef42027f840aa50a4e31cb47f2b4ef88011c8c474e8c2a"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.941899 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.943048 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.943090 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.943105 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.944181 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.944163 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a6ac819763d72864a1a144895080910c2a12faba46c1b761c5e37ae284bed137"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.945058 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.945102 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.945112 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.948693 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.948733 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.948747 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6"}
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.948795 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 11:16:32 crc kubenswrapper[4678]: I1124
11:16:32.948889 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.949998 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.950036 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.950038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.950067 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.950079 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.950048 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:32 crc kubenswrapper[4678]: I1124 11:16:32.996916 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.091207 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.092659 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.092723 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.092734 4678 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.092758 4678 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:16:33 crc kubenswrapper[4678]: E1124 11:16:33.093397 4678 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.955730 4678 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6ee4678d6d88768c4f83f30bca0f06c9697da23bc35c1c43ea30a85bea50059e" exitCode=0 Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.955820 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6ee4678d6d88768c4f83f30bca0f06c9697da23bc35c1c43ea30a85bea50059e"} Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.955863 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.956855 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.956876 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.956885 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.959124 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.959155 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7"} Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.959235 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.959628 4678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.959683 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.959641 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960717 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960733 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960744 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960770 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960782 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960799 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960809 4678 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960816 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960852 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960862 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960869 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:33 crc kubenswrapper[4678]: I1124 11:16:33.960772 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.965869 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8dc7e60ec336db411b3c1192707fe68ff8477719c2df85787a88e041516cb833"} Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.965934 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"51daef109047dbfd48f60c3088716c9fcfadd2ff94592e06240869573a49eaf6"} Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.965951 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5de6aa867dd10462e39753512ef93c3e32b8baf2000b123a566044ea4072f362"} Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.965968 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"712fd467877cad1a6db913f343aaafa1330e9d13b00f29ac27541f3899915368"} Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.965938 4678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.966029 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.966877 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.966916 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:34 crc kubenswrapper[4678]: I1124 11:16:34.966925 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.471023 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.471284 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.480384 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.480457 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.480470 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.972449 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"db24eb51b717c58b3558d9ab761fd79be95cad4ea4a75936fd007a4c0c12dcb6"} Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.972563 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.973600 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.973704 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.973727 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.979251 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.997839 4678 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:16:35 crc kubenswrapper[4678]: I1124 11:16:35.997925 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.029071 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.029294 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.030647 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.030746 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.030764 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.288424 4678 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.294021 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.295433 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.295485 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.295505 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.295537 4678 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.555783 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.556009 4678 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.556058 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.557883 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.557925 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.557938 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.882864 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.976316 4678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.976404 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.976419 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.978419 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.978473 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.978419 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.978529 4678 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.978492 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.978551 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.984895 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.985075 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.986519 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.986568 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:36 crc kubenswrapper[4678]: I1124 11:16:36.986585 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:37 crc kubenswrapper[4678]: I1124 11:16:37.521250 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 11:16:37 crc kubenswrapper[4678]: I1124 11:16:37.978768 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:37 crc kubenswrapper[4678]: I1124 11:16:37.979906 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:37 crc kubenswrapper[4678]: I1124 11:16:37.979936 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 
11:16:37 crc kubenswrapper[4678]: I1124 11:16:37.979947 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.964785 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.965616 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.966768 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.966800 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.966810 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.973546 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.981003 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.981003 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.982054 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.982094 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.982054 4678 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.982132 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.982145 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:38 crc kubenswrapper[4678]: I1124 11:16:38.982105 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:39 crc kubenswrapper[4678]: I1124 11:16:39.437782 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:39 crc kubenswrapper[4678]: I1124 11:16:39.438007 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:39 crc kubenswrapper[4678]: I1124 11:16:39.439359 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:39 crc kubenswrapper[4678]: I1124 11:16:39.439412 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:39 crc kubenswrapper[4678]: I1124 11:16:39.439423 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:39 crc kubenswrapper[4678]: E1124 11:16:39.983974 4678 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 11:16:43 crc kubenswrapper[4678]: W1124 11:16:43.723613 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:16:43 crc 
kubenswrapper[4678]: I1124 11:16:43.723747 4678 trace.go:236] Trace[512394511]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:16:33.722) (total time: 10001ms): Nov 24 11:16:43 crc kubenswrapper[4678]: Trace[512394511]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:16:43.723) Nov 24 11:16:43 crc kubenswrapper[4678]: Trace[512394511]: [10.001091749s] [10.001091749s] END Nov 24 11:16:43 crc kubenswrapper[4678]: E1124 11:16:43.723778 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:16:43 crc kubenswrapper[4678]: W1124 11:16:43.769612 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:16:43 crc kubenswrapper[4678]: I1124 11:16:43.769743 4678 trace.go:236] Trace[1832486506]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:16:33.768) (total time: 10001ms): Nov 24 11:16:43 crc kubenswrapper[4678]: Trace[1832486506]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:16:43.769) Nov 24 11:16:43 crc kubenswrapper[4678]: Trace[1832486506]: [10.001606599s] [10.001606599s] END Nov 24 11:16:43 crc kubenswrapper[4678]: E1124 11:16:43.769773 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:16:43 crc kubenswrapper[4678]: W1124 11:16:43.775260 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:16:43 crc kubenswrapper[4678]: I1124 11:16:43.775372 4678 trace.go:236] Trace[25475235]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:16:33.773) (total time: 10001ms): Nov 24 11:16:43 crc kubenswrapper[4678]: Trace[25475235]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:16:43.775) Nov 24 11:16:43 crc kubenswrapper[4678]: Trace[25475235]: [10.001358315s] [10.001358315s] END Nov 24 11:16:43 crc kubenswrapper[4678]: E1124 11:16:43.775404 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:16:43 crc kubenswrapper[4678]: W1124 11:16:43.826527 4678 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:16:43 crc kubenswrapper[4678]: I1124 11:16:43.826623 4678 trace.go:236] Trace[737100148]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 11:16:33.825) (total time: 10001ms): Nov 24 11:16:43 crc kubenswrapper[4678]: Trace[737100148]: 
---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:16:43.826) Nov 24 11:16:43 crc kubenswrapper[4678]: Trace[737100148]: [10.001562462s] [10.001562462s] END Nov 24 11:16:43 crc kubenswrapper[4678]: E1124 11:16:43.826648 4678 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 11:16:43 crc kubenswrapper[4678]: I1124 11:16:43.828645 4678 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 11:16:44 crc kubenswrapper[4678]: I1124 11:16:44.345011 4678 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:16:44 crc kubenswrapper[4678]: I1124 11:16:44.345094 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:16:44 crc kubenswrapper[4678]: I1124 11:16:44.354597 4678 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 11:16:44 crc kubenswrapper[4678]: I1124 11:16:44.354706 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 11:16:45 crc kubenswrapper[4678]: I1124 11:16:45.476711 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:45 crc kubenswrapper[4678]: I1124 11:16:45.476908 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:45 crc kubenswrapper[4678]: I1124 11:16:45.478205 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:45 crc kubenswrapper[4678]: I1124 11:16:45.478317 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:45 crc kubenswrapper[4678]: I1124 11:16:45.478353 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:45 crc kubenswrapper[4678]: I1124 11:16:45.998037 4678 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 11:16:45 crc kubenswrapper[4678]: I1124 11:16:45.998119 4678 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.016629 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.016955 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.018252 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.018311 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.018329 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.034205 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.565036 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.565274 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.566942 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.566982 4678 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.566992 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:46 crc kubenswrapper[4678]: I1124 11:16:46.571372 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.000307 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.000311 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.002184 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.002238 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.002256 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.002738 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.002823 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.002845 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.816203 4678 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.820042 4678 
apiserver.go:52] "Watching apiserver" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.826398 4678 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.826902 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.827412 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.827621 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:47 crc kubenswrapper[4678]: E1124 11:16:47.827791 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.827843 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:47 crc kubenswrapper[4678]: E1124 11:16:47.827942 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.828057 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.828555 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.828592 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:47 crc kubenswrapper[4678]: E1124 11:16:47.828824 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.830120 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.830602 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.830748 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.830899 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.830920 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.832168 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.833279 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.833279 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.833709 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.838979 4678 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 11:16:47 crc kubenswrapper[4678]: 
I1124 11:16:47.859294 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.877201 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.891899 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.904238 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.914577 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.928146 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.939178 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.950311 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:47 crc kubenswrapper[4678]: I1124 11:16:47.965704 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:48 crc kubenswrapper[4678]: I1124 11:16:48.744587 4678 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 11:16:48 crc kubenswrapper[4678]: I1124 11:16:48.895114 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:48 crc kubenswrapper[4678]: E1124 11:16:48.895307 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.355336 4678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.358894 4678 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.360628 4678 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.371933 4678 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.414318 4678 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37550->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.414384 4678 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37550->192.168.126.11:17697: read: connection reset by peer" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.414315 4678 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37546->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.414526 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37546->192.168.126.11:17697: read: connection reset by peer" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.414653 4678 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.414687 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.426847 4678 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.438563 4678 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.438699 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.439239 4678 csr.go:261] certificate signing request csr-9bfkf is approved, waiting to be issued Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.448344 4678 csr.go:257] certificate signing request csr-9bfkf is issued Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459604 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459662 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459709 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459731 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459753 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459772 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459797 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459820 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459839 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459857 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459875 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459893 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459911 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459928 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459945 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459967 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.459988 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460011 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460031 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") 
" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460052 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460186 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460208 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460228 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460267 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460286 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460304 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460321 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460338 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460355 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460376 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 
11:16:49.460397 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460416 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460433 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460453 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460474 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460493 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460510 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460527 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460548 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460569 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460589 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:49 crc 
kubenswrapper[4678]: I1124 11:16:49.460608 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460638 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460656 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460697 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460714 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460744 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460762 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460782 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460803 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460824 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460852 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 
11:16:49.460870 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460890 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460916 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460945 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460969 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.460992 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461009 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461027 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461046 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461064 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461087 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461106 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461124 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461143 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461162 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461195 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461225 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461250 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461276 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461300 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461335 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461359 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461387 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461416 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461446 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461474 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461498 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461522 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461548 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461572 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461596 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461618 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461635 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461651 4678 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461689 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461707 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461724 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461739 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461756 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461774 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461788 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461813 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461830 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461846 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461862 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461879 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461895 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461917 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461941 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461963 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.461986 4678 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462014 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462033 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462050 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462068 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462084 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") 
pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462099 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462115 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462131 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462148 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462164 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462179 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462196 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462214 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462232 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462249 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462267 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462286 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462303 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462320 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462335 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462350 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462367 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462384 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462400 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462416 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462432 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462447 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462463 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462478 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462494 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462509 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462527 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462544 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462591 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462607 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462625 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462643 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462659 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462694 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462710 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462727 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462742 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462758 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462775 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462790 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462807 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462830 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462847 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462866 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 
11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462883 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462900 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462918 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462935 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462952 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462967 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.462984 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463001 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463017 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463034 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463069 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:16:49 crc kubenswrapper[4678]: 
I1124 11:16:49.463085 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463101 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463117 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463134 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463153 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463171 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463188 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463213 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463229 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463245 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463262 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 
24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463278 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463296 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463317 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463332 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463351 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463369 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463387 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463405 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463422 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463441 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463459 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 
11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463477 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463498 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463522 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463546 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463563 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463582 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463600 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463617 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463634 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463651 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463716 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463743 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463762 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463784 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463813 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463834 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463855 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463876 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463897 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463916 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463936 
4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463956 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463975 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.463994 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.464128 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.464377 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.464752 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.464761 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465037 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465259 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465266 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465490 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465594 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465679 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465775 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465895 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465930 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465917 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465968 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465961 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.465985 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466035 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466203 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466219 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466236 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466266 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466271 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466277 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466479 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466687 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.466751 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.467029 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.467042 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.467093 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.467200 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.467378 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.467414 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.467568 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.467971 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.464773 4678 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.468121 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.468134 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.468415 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.468489 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.468857 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.468873 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.468903 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.468906 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.469119 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.469174 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.469456 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.469686 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.469847 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.469893 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.470013 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.470130 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.471304 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.471557 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.472493 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.472569 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.472701 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.473015 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.473265 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.473353 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.473641 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.483513 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.483756 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.484506 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.484705 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.484999 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.485134 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.485285 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.485725 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.485731 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.486266 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.486625 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.486746 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.488013 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.488224 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.488656 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.488883 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.489226 4678 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:49 crc kubenswrapper[4678]: 
E1124 11:16:49.489302 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:49.989278564 +0000 UTC m=+20.920338413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.490416 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.490719 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.491512 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.491822 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:16:49.991800577 +0000 UTC m=+20.922860216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.491823 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.492032 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.492060 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.492170 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.492180 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.492319 4678 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.492458 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:49.992431616 +0000 UTC m=+20.923491435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.492710 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.493122 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.493389 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.493767 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.494076 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.494445 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.494453 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.494534 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.494556 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.496879 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.503714 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.503852 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.503895 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.503887 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.503918 4678 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.503863 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.504006 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:50.003980716 +0000 UTC m=+20.935040535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.504321 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.504332 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.504424 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.504437 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.504469 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.504980 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.505347 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.505657 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.505864 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.505851 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.508902 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.505959 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.505977 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.510655 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.506154 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.510692 4678 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.510770 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:50.0107455 +0000 UTC m=+20.941805319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.506250 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.506306 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.506582 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.506785 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.507064 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.507077 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.507824 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.507936 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.508083 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.508493 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.508600 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.508827 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509043 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509049 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509058 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509069 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509097 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509140 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509142 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509400 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.509612 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.510146 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.510177 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.510248 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.510566 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.511086 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.511483 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.511511 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.512614 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.513321 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.513349 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.513426 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.513482 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.513550 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.513873 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.514048 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.514168 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.514447 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.514681 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.514754 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.514796 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.514908 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.515099 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.515406 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.515432 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.515479 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.515741 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.516047 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.516176 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.516300 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.516737 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.516827 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.516863 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.517086 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.517471 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.517632 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.517843 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.518175 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.518487 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.518722 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.518879 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.519338 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.519379 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.522866 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.523202 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.523340 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.523453 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.523848 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.523897 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.523995 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.524115 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.524162 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.524182 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.524215 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.524491 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.525650 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.525848 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.526093 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.526224 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.526317 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.526381 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.526403 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.526426 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.526426 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.526806 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.527168 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.527280 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.527478 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.530488 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.541528 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.550215 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.555746 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565511 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565610 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565711 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565727 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565741 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565756 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565768 4678 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565780 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565792 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565804 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565844 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565856 4678 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565868 4678 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565881 4678 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565890 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565898 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565907 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565903 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565918 4678 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565985 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.565996 4678 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566006 4678 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566016 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566026 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566018 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566036 4678 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566137 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566150 4678 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566162 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566172 4678 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566182 4678 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566193 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566205 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566217 4678 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566229 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566243 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566256 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566278 4678 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566289 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566299 4678 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566309 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566321 4678 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath 
\"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566331 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566342 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566352 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566362 4678 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566372 4678 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566382 4678 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566391 4678 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 
11:16:49.566401 4678 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566411 4678 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566420 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566430 4678 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566440 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566451 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566463 4678 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566472 4678 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566483 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566493 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566504 4678 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566513 4678 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566524 4678 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566539 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566553 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566565 4678 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566577 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566589 4678 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566602 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566614 4678 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566627 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566639 4678 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") 
on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566652 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566703 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566717 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566727 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566738 4678 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566750 4678 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566761 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" 
DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566771 4678 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566782 4678 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566792 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566803 4678 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566812 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566823 4678 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566832 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 
11:16:49.566842 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566851 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566861 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566870 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566880 4678 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566891 4678 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566899 4678 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566909 4678 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566923 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566940 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566955 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566968 4678 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566980 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.566991 4678 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567000 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath 
\"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567009 4678 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567018 4678 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567026 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567036 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567044 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567055 4678 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567063 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567072 4678 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567081 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567091 4678 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567102 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567111 4678 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567120 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567129 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567138 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567148 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567157 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567166 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567174 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567182 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567192 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567201 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567211 4678 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567219 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567229 4678 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567239 4678 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567249 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567260 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567268 4678 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc 
kubenswrapper[4678]: I1124 11:16:49.567277 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567286 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567294 4678 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567304 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567313 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567321 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567331 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567340 4678 reconciler_common.go:293] "Volume detached 
for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567349 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567358 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567370 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567380 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567391 4678 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567401 4678 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567412 4678 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567421 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567432 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567442 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567452 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567462 4678 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567471 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567480 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567490 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567499 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567509 4678 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567519 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567529 4678 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567538 4678 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567547 4678 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567556 4678 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567564 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567573 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567583 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567591 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567600 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567610 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc 
kubenswrapper[4678]: I1124 11:16:49.567620 4678 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567692 4678 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567703 4678 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567714 4678 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567725 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567735 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567745 4678 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567755 4678 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567764 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567773 4678 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567783 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567793 4678 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567802 4678 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567812 4678 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567821 4678 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 
24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567831 4678 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567842 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567851 4678 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567861 4678 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567871 4678 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567883 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567893 4678 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: 
I1124 11:16:49.567903 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567912 4678 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567921 4678 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567930 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567939 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567948 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567957 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567965 4678 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567976 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.567987 4678 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.568003 4678 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.655997 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.670129 4678 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 24 11:16:49 crc kubenswrapper[4678]: W1124 11:16:49.670975 4678 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.670877 4678 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.214:54704->38.102.83.214:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187aed2f0dfb46c9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 11:16:30.422787785 +0000 UTC m=+1.353847434,LastTimestamp:2025-11-24 11:16:30.422787785 +0000 UTC m=+1.353847434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.672420 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 11:16:49 crc kubenswrapper[4678]: W1124 11:16:49.672556 4678 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.683817 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 11:16:49 crc kubenswrapper[4678]: W1124 11:16:49.684568 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5704393d90162ca140f8708bce95a0d90cd149a6c7198f0c5e87e3b4d05914a4 WatchSource:0}: Error finding container 5704393d90162ca140f8708bce95a0d90cd149a6c7198f0c5e87e3b4d05914a4: Status 404 returned error can't find the container with id 5704393d90162ca140f8708bce95a0d90cd149a6c7198f0c5e87e3b4d05914a4 Nov 24 11:16:49 crc kubenswrapper[4678]: W1124 11:16:49.706792 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-832bcdbedad1838a3c85453560b107035440d03ebd33ef4ed27e965499523744 WatchSource:0}: Error finding container 832bcdbedad1838a3c85453560b107035440d03ebd33ef4ed27e965499523744: Status 404 returned error can't find the container with id 832bcdbedad1838a3c85453560b107035440d03ebd33ef4ed27e965499523744 Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.894568 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.894569 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.894750 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:16:49 crc kubenswrapper[4678]: E1124 11:16:49.895049 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.898901 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.900082 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.901190 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.902350 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.902945 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.904016 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.904616 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.905244 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.906507 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.906959 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.907232 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.908389 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.909186 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.910291 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.911097 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.912357 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.912893 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.913579 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.914492 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.915403 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.916797 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.917455 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.918100 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.919055 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.919964 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.920289 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.920927 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.921536 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.922735 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.923275 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.923944 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.924864 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.925330 4678 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.925428 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.927463 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.927987 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.928480 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.930389 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.931077 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.931950 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.932569 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.933510 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.933576 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.934364 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 11:16:49 crc 
kubenswrapper[4678]: I1124 11:16:49.935160 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.936360 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.937473 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.938136 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.939187 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.939946 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.941428 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.942238 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 11:16:49 crc 
kubenswrapper[4678]: I1124 11:16:49.943309 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.943940 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.944908 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.945538 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.946150 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.948033 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.958856 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:49 crc kubenswrapper[4678]: I1124 11:16:49.971487 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.010095 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5704393d90162ca140f8708bce95a0d90cd149a6c7198f0c5e87e3b4d05914a4"} Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.012482 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78"} Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.012558 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2d11eb8b129fd856f7ba2f121724aacaa7ca1d6d2ff30c6eeb6c3d6d2a18e60d"} Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.014871 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.018033 4678 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7" exitCode=255 Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.018109 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7"} Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.020084 4678 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.020730 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f"} Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.020794 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016"} Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.020809 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"832bcdbedad1838a3c85453560b107035440d03ebd33ef4ed27e965499523744"} Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.031186 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.033269 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.034083 4678 scope.go:117] "RemoveContainer" containerID="735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.050956 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.062583 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.072441 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.072551 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.072588 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.072609 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:16:51.072585779 +0000 UTC m=+22.003645418 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.072637 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.072721 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.072840 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.072862 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.072927 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.072931 4678 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.072948 4678 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.072980 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:51.07296702 +0000 UTC m=+22.004026659 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.073034 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:51.073006831 +0000 UTC m=+22.004066470 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.072892 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.073075 4678 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.073104 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:51.073096114 +0000 UTC m=+22.004155993 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.073541 4678 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.073582 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:51.073572478 +0000 UTC m=+22.004632357 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.075278 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.086903 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.100264 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.114401 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.129149 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.141768 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.154199 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.167850 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.180463 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.191361 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.449708 4678 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-24 11:11:49 +0000 UTC, rotation deadline is 2026-10-06 19:16:45.110019339 +0000 UTC Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.449806 4678 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7591h59m54.660217497s for next certificate rotation Nov 24 11:16:50 crc kubenswrapper[4678]: I1124 11:16:50.895543 4678 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:50 crc kubenswrapper[4678]: E1124 11:16:50.895743 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.025764 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.029110 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2"} Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.029662 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.047598 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.068192 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.079894 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.079976 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.080002 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.080025 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.080049 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080134 4678 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080197 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:16:53.080103393 +0000 UTC m=+24.011163072 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080277 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:53.080238077 +0000 UTC m=+24.011297956 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080289 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080289 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080334 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080330 4678 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080452 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:53.080422752 +0000 UTC m=+24.011482391 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080349 4678 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080528 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:53.080521845 +0000 UTC m=+24.011581484 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080306 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080560 4678 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.080587 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:53.080581586 +0000 UTC m=+24.011641225 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.085742 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.101041 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.114414 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.128230 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.144230 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.726163 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-snkj4"] Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.726836 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-snkj4" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.729340 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.731135 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.731539 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.744396 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.759496 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.776930 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.785029 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6ee7405-6c4a-4768-a467-0d931c4143da-hosts-file\") pod \"node-resolver-snkj4\" (UID: \"a6ee7405-6c4a-4768-a467-0d931c4143da\") " pod="openshift-dns/node-resolver-snkj4" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.785081 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gknxh\" (UniqueName: \"kubernetes.io/projected/a6ee7405-6c4a-4768-a467-0d931c4143da-kube-api-access-gknxh\") pod \"node-resolver-snkj4\" (UID: \"a6ee7405-6c4a-4768-a467-0d931c4143da\") " pod="openshift-dns/node-resolver-snkj4" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.792879 4678 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.804200 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.823254 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.849257 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.865198 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.886275 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gknxh\" (UniqueName: \"kubernetes.io/projected/a6ee7405-6c4a-4768-a467-0d931c4143da-kube-api-access-gknxh\") pod \"node-resolver-snkj4\" (UID: \"a6ee7405-6c4a-4768-a467-0d931c4143da\") " 
pod="openshift-dns/node-resolver-snkj4" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.886361 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6ee7405-6c4a-4768-a467-0d931c4143da-hosts-file\") pod \"node-resolver-snkj4\" (UID: \"a6ee7405-6c4a-4768-a467-0d931c4143da\") " pod="openshift-dns/node-resolver-snkj4" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.886462 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6ee7405-6c4a-4768-a467-0d931c4143da-hosts-file\") pod \"node-resolver-snkj4\" (UID: \"a6ee7405-6c4a-4768-a467-0d931c4143da\") " pod="openshift-dns/node-resolver-snkj4" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.894589 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.894647 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.894761 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:16:51 crc kubenswrapper[4678]: E1124 11:16:51.894976 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:16:51 crc kubenswrapper[4678]: I1124 11:16:51.906612 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gknxh\" (UniqueName: \"kubernetes.io/projected/a6ee7405-6c4a-4768-a467-0d931c4143da-kube-api-access-gknxh\") pod \"node-resolver-snkj4\" (UID: \"a6ee7405-6c4a-4768-a467-0d931c4143da\") " pod="openshift-dns/node-resolver-snkj4" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.041132 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-snkj4" Nov 24 11:16:52 crc kubenswrapper[4678]: W1124 11:16:52.080026 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ee7405_6c4a_4768_a467_0d931c4143da.slice/crio-63605e7d9cd14f1e70bc3087a0d333e7d6e852343b5fbc6ec5ad5deaa9190ea4 WatchSource:0}: Error finding container 63605e7d9cd14f1e70bc3087a0d333e7d6e852343b5fbc6ec5ad5deaa9190ea4: Status 404 returned error can't find the container with id 63605e7d9cd14f1e70bc3087a0d333e7d6e852343b5fbc6ec5ad5deaa9190ea4 Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.112860 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hhrs6"] Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.113287 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.113782 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h24xv"] Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.114261 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.117571 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.117728 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7tnrj"] Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.117869 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.117939 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.118036 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.118151 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.118227 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.118298 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.118376 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.118429 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.118919 4678 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.119049 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.123101 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.124267 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.139621 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.155928 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.174877 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189799 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-daemon-config\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189849 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-etc-kubernetes\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189870 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lswb\" (UniqueName: \"kubernetes.io/projected/f159c812-75d9-4ad6-9e20-4d208ffe42fb-kube-api-access-4lswb\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189894 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-os-release\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189915 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcdm\" (UniqueName: 
\"kubernetes.io/projected/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-kube-api-access-zxcdm\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189932 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-system-cni-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189949 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-mcd-auth-proxy-config\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189854 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.189978 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-cni-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190155 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-rootfs\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190173 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-cni-bin\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190192 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-conf-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190209 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cni-binary-copy\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190224 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f159c812-75d9-4ad6-9e20-4d208ffe42fb-cni-binary-copy\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190240 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-system-cni-dir\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190256 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cnibin\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 
11:16:52.190277 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-cni-multus\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190293 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-socket-dir-parent\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190309 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-k8s-cni-cncf-io\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190325 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190340 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfd9\" (UniqueName: \"kubernetes.io/projected/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-kube-api-access-bkfd9\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190373 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-kubelet\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190390 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-netns\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190406 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-hostroot\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190426 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-multus-certs\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190445 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " 
pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190460 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-cnibin\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190483 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-proxy-tls\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.190501 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-os-release\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.205583 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.218843 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.234599 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.247741 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.259660 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.274726 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.289704 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.290863 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-system-cni-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.290907 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-mcd-auth-proxy-config\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.290925 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-cni-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.290946 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rootfs\" (UniqueName: \"kubernetes.io/host-path/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-rootfs\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.290966 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cni-binary-copy\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.290983 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f159c812-75d9-4ad6-9e20-4d208ffe42fb-cni-binary-copy\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291000 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-cni-bin\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291016 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-conf-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291036 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-system-cni-dir\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291054 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cnibin\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291070 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-cni-multus\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291087 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-socket-dir-parent\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291109 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-k8s-cni-cncf-io\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291129 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291148 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfd9\" (UniqueName: \"kubernetes.io/projected/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-kube-api-access-bkfd9\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291182 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-kubelet\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291200 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-netns\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291217 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-hostroot\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291234 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-multus-certs\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291255 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291273 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-cnibin\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291295 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-proxy-tls\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291313 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-os-release\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291335 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-os-release\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291355 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-daemon-config\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291372 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-etc-kubernetes\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291390 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lswb\" (UniqueName: \"kubernetes.io/projected/f159c812-75d9-4ad6-9e20-4d208ffe42fb-kube-api-access-4lswb\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291408 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcdm\" (UniqueName: \"kubernetes.io/projected/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-kube-api-access-zxcdm\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291738 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-cni-bin\") pod 
\"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291775 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-conf-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291828 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-multus-certs\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291849 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f159c812-75d9-4ad6-9e20-4d208ffe42fb-cni-binary-copy\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291908 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-rootfs\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291908 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-hostroot\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.291991 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-system-cni-dir\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292073 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cnibin\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292127 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-cni-multus\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292196 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-socket-dir-parent\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292197 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-cni-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292228 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" 
(UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-k8s-cni-cncf-io\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292295 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-system-cni-dir\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292380 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292430 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-cnibin\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292476 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-var-lib-kubelet\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292560 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-host-run-netns\") pod \"multus-h24xv\" (UID: 
\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292927 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-os-release\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292926 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-os-release\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292976 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-cni-binary-copy\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.292977 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f159c812-75d9-4ad6-9e20-4d208ffe42fb-etc-kubernetes\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.293081 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-mcd-auth-proxy-config\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.293377 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f159c812-75d9-4ad6-9e20-4d208ffe42fb-multus-daemon-config\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.297840 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-proxy-tls\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.303681 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.308558 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.313692 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfd9\" (UniqueName: \"kubernetes.io/projected/0d7ceb4b-c0fc-4888-b251-a87db4a2665e-kube-api-access-bkfd9\") pod \"machine-config-daemon-hhrs6\" (UID: \"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\") " pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.316004 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lswb\" (UniqueName: \"kubernetes.io/projected/f159c812-75d9-4ad6-9e20-4d208ffe42fb-kube-api-access-4lswb\") pod \"multus-h24xv\" (UID: \"f159c812-75d9-4ad6-9e20-4d208ffe42fb\") " pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.324707 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.336856 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.349008 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.361965 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.372087 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.384124 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.391592 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcdm\" (UniqueName: \"kubernetes.io/projected/6fdaea25-35e1-4a8b-aabd-ec50fb9af003-kube-api-access-zxcdm\") pod \"multus-additional-cni-plugins-7tnrj\" (UID: \"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\") " pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.399386 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.412598 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.425802 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.432929 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h24xv" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.439153 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" Nov 24 11:16:52 crc kubenswrapper[4678]: W1124 11:16:52.440689 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d7ceb4b_c0fc_4888_b251_a87db4a2665e.slice/crio-ebc883a29ca6c3f5240ffac35842fe04630057bbfb48886c291a5840a71724f2 WatchSource:0}: Error finding container ebc883a29ca6c3f5240ffac35842fe04630057bbfb48886c291a5840a71724f2: Status 404 returned error can't find the container with id ebc883a29ca6c3f5240ffac35842fe04630057bbfb48886c291a5840a71724f2 Nov 24 11:16:52 crc kubenswrapper[4678]: W1124 11:16:52.448764 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf159c812_75d9_4ad6_9e20_4d208ffe42fb.slice/crio-4dd17ee4a373d92dfffacffff69d249ab22850ad48c3441b7a73e99632fde376 WatchSource:0}: Error finding container 4dd17ee4a373d92dfffacffff69d249ab22850ad48c3441b7a73e99632fde376: Status 404 returned error can't find the container with id 4dd17ee4a373d92dfffacffff69d249ab22850ad48c3441b7a73e99632fde376 Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.491271 4678 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-zsq5s"] Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.493059 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.496134 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.498794 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.499404 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.499598 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.499814 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.499937 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.499959 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.514895 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.531047 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.550651 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.567435 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.579914 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594197 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-systemd-units\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594243 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-systemd\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594280 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovn-node-metrics-cert\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594298 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-script-lib\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594389 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-etc-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594427 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-config\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594500 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-var-lib-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594542 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-kubelet\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594580 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594603 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-netd\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594651 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-ovn\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594689 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-node-log\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594738 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-slash\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594765 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-log-socket\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594789 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594817 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-env-overrides\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594861 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-netns\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594884 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594916 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-bin\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.594940 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqfl5\" (UniqueName: \"kubernetes.io/projected/318b13d4-6c61-4b45-bb2f-0a7e243946a6-kube-api-access-vqfl5\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.599271 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.620002 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.633747 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.646110 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.661339 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: W1124 11:16:52.666533 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fdaea25_35e1_4a8b_aabd_ec50fb9af003.slice/crio-31b217b8b0f8c033d14d68b1b2cb85c0afbfe47af6eb11902e203fe6184a83c6 WatchSource:0}: Error finding container 31b217b8b0f8c033d14d68b1b2cb85c0afbfe47af6eb11902e203fe6184a83c6: Status 404 returned error can't find the container with id 31b217b8b0f8c033d14d68b1b2cb85c0afbfe47af6eb11902e203fe6184a83c6 Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.681417 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.696416 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-ovn\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.696600 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-ovn\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.696869 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-node-log\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697007 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697112 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697056 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-node-log\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697140 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-env-overrides\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697422 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-slash\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697513 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-slash\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697446 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697819 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-log-socket\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.698077 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-netns\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.698240 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.698381 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-bin\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.698503 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-bin\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.698310 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.698308 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-netns\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697891 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-env-overrides\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.697914 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-log-socket\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.698955 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqfl5\" (UniqueName: \"kubernetes.io/projected/318b13d4-6c61-4b45-bb2f-0a7e243946a6-kube-api-access-vqfl5\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.699117 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-systemd-units\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.699340 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-systemd\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.699194 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-systemd-units\") pod 
\"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.699428 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-systemd\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.699509 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovn-node-metrics-cert\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.699921 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-script-lib\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700136 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-etc-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700285 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-etc-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700290 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-config\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700400 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-var-lib-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700477 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-kubelet\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700529 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700574 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-netd\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc 
kubenswrapper[4678]: I1124 11:16:52.700588 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-var-lib-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700622 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-openvswitch\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700600 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-kubelet\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.700804 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-netd\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.701229 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-script-lib\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.701729 4678 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-config\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.702320 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovn-node-metrics-cert\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.721982 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqfl5\" (UniqueName: \"kubernetes.io/projected/318b13d4-6c61-4b45-bb2f-0a7e243946a6-kube-api-access-vqfl5\") pod \"ovnkube-node-zsq5s\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.848041 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:52 crc kubenswrapper[4678]: I1124 11:16:52.894576 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:52 crc kubenswrapper[4678]: E1124 11:16:52.894772 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.001816 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.006563 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.012255 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.021089 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.037551 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerStarted","Data":"9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.037623 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerStarted","Data":"31b217b8b0f8c033d14d68b1b2cb85c0afbfe47af6eb11902e203fe6184a83c6"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.038137 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.039470 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h24xv" event={"ID":"f159c812-75d9-4ad6-9e20-4d208ffe42fb","Type":"ContainerStarted","Data":"8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.039506 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h24xv" event={"ID":"f159c812-75d9-4ad6-9e20-4d208ffe42fb","Type":"ContainerStarted","Data":"4dd17ee4a373d92dfffacffff69d249ab22850ad48c3441b7a73e99632fde376"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.040790 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6" exitCode=0 Nov 
24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.040877 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.040935 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"5293548c0572094578227a0ec41195afe36c5f33f902c239464c1c636a22211b"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.042526 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-snkj4" event={"ID":"a6ee7405-6c4a-4768-a467-0d931c4143da","Type":"ContainerStarted","Data":"4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.042553 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-snkj4" event={"ID":"a6ee7405-6c4a-4768-a467-0d931c4143da","Type":"ContainerStarted","Data":"63605e7d9cd14f1e70bc3087a0d333e7d6e852343b5fbc6ec5ad5deaa9190ea4"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.044765 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.044814 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.044831 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"ebc883a29ca6c3f5240ffac35842fe04630057bbfb48886c291a5840a71724f2"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.047395 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1"} Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.055499 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.068315 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.084706 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.098990 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.104203 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.104336 4678 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:16:57.104310442 +0000 UTC m=+28.035370091 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.104412 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.104506 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.104561 4678 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.104650 4678 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:57.10462851 +0000 UTC m=+28.035688149 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.104739 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.104772 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.104881 4678 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.104911 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:16:57.104903889 +0000 UTC m=+28.035963528 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.104989 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.105001 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.105013 4678 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.105037 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:57.105030323 +0000 UTC m=+28.036089962 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.105431 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.105456 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.105465 4678 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.105491 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:16:57.105483925 +0000 UTC m=+28.036543564 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.119224 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.138788 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.152942 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.167219 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.182707 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.200037 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.214541 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.227086 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.241816 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe
2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.256911 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.271349 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.288026 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.300332 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.314400 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.327174 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.343580 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.360142 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.380541 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.395046 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:53Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.895496 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:53 crc kubenswrapper[4678]: I1124 11:16:53.895603 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.895685 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:16:53 crc kubenswrapper[4678]: E1124 11:16:53.895814 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.055426 4678 generic.go:334] "Generic (PLEG): container finished" podID="6fdaea25-35e1-4a8b-aabd-ec50fb9af003" containerID="9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922" exitCode=0 Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.055537 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerDied","Data":"9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922"} Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.062107 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0"} Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.062155 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9"} Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.062172 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6"} Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.062185 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859"} Nov 24 11:16:54 
crc kubenswrapper[4678]: I1124 11:16:54.062196 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6"} Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.062219 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631"} Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.070483 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.087398 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.102286 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.117692 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.135470 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.150743 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.183255 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.200020 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.216821 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.231608 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.245256 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.269183 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\
\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.285709 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.391077 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7twxw"] Nov 24 11:16:54 crc 
kubenswrapper[4678]: I1124 11:16:54.391635 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.394596 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.395209 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.395882 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.396151 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.405955 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.418603 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/533ce88b-4af0-47e6-a890-d25fb0e126be-host\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.418655 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjlm\" (UniqueName: \"kubernetes.io/projected/533ce88b-4af0-47e6-a890-d25fb0e126be-kube-api-access-gtjlm\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.418726 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/533ce88b-4af0-47e6-a890-d25fb0e126be-serviceca\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.421343 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.434454 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.446413 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.460466 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.476244 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.488519 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.502848 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.516733 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.520151 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/533ce88b-4af0-47e6-a890-d25fb0e126be-serviceca\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.520241 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/533ce88b-4af0-47e6-a890-d25fb0e126be-host\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.520302 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtjlm\" (UniqueName: \"kubernetes.io/projected/533ce88b-4af0-47e6-a890-d25fb0e126be-kube-api-access-gtjlm\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " 
pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.520369 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/533ce88b-4af0-47e6-a890-d25fb0e126be-host\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.521125 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/533ce88b-4af0-47e6-a890-d25fb0e126be-serviceca\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.531911 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.541117 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtjlm\" (UniqueName: \"kubernetes.io/projected/533ce88b-4af0-47e6-a890-d25fb0e126be-kube-api-access-gtjlm\") pod \"node-ca-7twxw\" (UID: \"533ce88b-4af0-47e6-a890-d25fb0e126be\") " pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.546662 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.567555 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.578177 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.592005 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:54Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.716869 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7twxw" Nov 24 11:16:54 crc kubenswrapper[4678]: I1124 11:16:54.895332 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:54 crc kubenswrapper[4678]: E1124 11:16:54.895989 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.068188 4678 generic.go:334] "Generic (PLEG): container finished" podID="6fdaea25-35e1-4a8b-aabd-ec50fb9af003" containerID="1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae" exitCode=0 Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.068254 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerDied","Data":"1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae"} Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.070423 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7twxw" event={"ID":"533ce88b-4af0-47e6-a890-d25fb0e126be","Type":"ContainerStarted","Data":"c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3"} Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.070452 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7twxw" event={"ID":"533ce88b-4af0-47e6-a890-d25fb0e126be","Type":"ContainerStarted","Data":"8724e0ea955c7012fb047a220128d3776335af5a7bf045c4ff8cc96c760b668f"} Nov 24 11:16:55 crc 
kubenswrapper[4678]: I1124 11:16:55.095321 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.118304 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.138427 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.153070 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.171230 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.190547 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.208768 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.227529 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.243062 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.254975 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.275379 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.297001 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.312645 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.333296 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.353275 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.370908 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.387777 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.401539 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.415726 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.429466 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.448885 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.467834 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.492372 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.507373 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.522518 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.535312 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.553778 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.566504 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.761102 4678 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.763863 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.763922 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 
11:16:55.763940 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.764092 4678 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.778786 4678 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.779213 4678 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.780755 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.780792 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.780804 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.780823 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.780837 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:55Z","lastTransitionTime":"2025-11-24T11:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:55 crc kubenswrapper[4678]: E1124 11:16:55.826171 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.832762 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.832818 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.832835 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.832861 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.832876 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:55Z","lastTransitionTime":"2025-11-24T11:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:55 crc kubenswrapper[4678]: E1124 11:16:55.854079 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.859018 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.859063 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.859073 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.859092 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.859105 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:55Z","lastTransitionTime":"2025-11-24T11:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:55 crc kubenswrapper[4678]: E1124 11:16:55.874191 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.878563 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.878610 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.878624 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.878646 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.878662 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:55Z","lastTransitionTime":"2025-11-24T11:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:55 crc kubenswrapper[4678]: E1124 11:16:55.893215 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.896889 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.896981 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:55 crc kubenswrapper[4678]: E1124 11:16:55.897048 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:16:55 crc kubenswrapper[4678]: E1124 11:16:55.897171 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.899504 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.899534 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.899548 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.899566 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.899580 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:55Z","lastTransitionTime":"2025-11-24T11:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:55 crc kubenswrapper[4678]: E1124 11:16:55.914564 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:55Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:55 crc kubenswrapper[4678]: E1124 11:16:55.914761 4678 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.922391 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.922452 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.922466 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.922484 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:55 crc kubenswrapper[4678]: I1124 11:16:55.922498 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:55Z","lastTransitionTime":"2025-11-24T11:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.025719 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.025782 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.025800 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.025827 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.025849 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.079244 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.081253 4678 generic.go:334] "Generic (PLEG): container finished" podID="6fdaea25-35e1-4a8b-aabd-ec50fb9af003" containerID="d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0" exitCode=0 Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.081322 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerDied","Data":"d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.097338 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.111563 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.122908 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.129222 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.129258 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.129269 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc 
kubenswrapper[4678]: I1124 11:16:56.129288 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.129301 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.139489 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.154803 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.171959 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.186644 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.200341 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.210776 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.226247 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 
11:16:56.232049 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.232079 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.232088 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.232105 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.232115 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.245919 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.258105 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.271368 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.284091 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T11:16:56Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.334447 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.334492 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.334507 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.334527 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.334538 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.437547 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.437588 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.437597 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.437612 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.437623 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.540779 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.540820 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.540830 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.540853 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.540864 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.644292 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.644390 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.644417 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.644450 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.644476 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.750746 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.750812 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.750826 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.750847 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.750861 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.853072 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.853126 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.853136 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.853153 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.853163 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.858625 4678 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.895481 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:56 crc kubenswrapper[4678]: E1124 11:16:56.895633 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.956481 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.956541 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.956551 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.956573 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:56 crc kubenswrapper[4678]: I1124 11:16:56.956584 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:56Z","lastTransitionTime":"2025-11-24T11:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.059364 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.059410 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.059421 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.059438 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.059448 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.089007 4678 generic.go:334] "Generic (PLEG): container finished" podID="6fdaea25-35e1-4a8b-aabd-ec50fb9af003" containerID="4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed" exitCode=0 Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.089081 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerDied","Data":"4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.106598 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.127167 4678 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.152766 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.153040 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153074 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:17:05.153023718 +0000 UTC m=+36.084083407 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.153131 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.153261 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153339 4678 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.153382 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153444 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:05.153418999 +0000 UTC m=+36.084478678 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153559 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153587 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153612 4678 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153710 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:05.153653525 +0000 UTC m=+36.084713194 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153782 4678 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.153834 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:05.15381864 +0000 UTC m=+36.084878319 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.153845 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.154532 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.154581 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.154603 4678 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.154710 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:05.154647504 +0000 UTC m=+36.085707183 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.163095 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.163153 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.163168 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.163189 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.163203 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.174628 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.193412 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.210761 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.224417 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.236690 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.259071 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.265522 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.265563 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.265573 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.265589 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.265600 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.273144 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca
23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.289366 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.308034 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.326633 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.340790 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:57Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.368553 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.368599 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.368610 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.368629 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.368640 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.472303 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.472352 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.472360 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.472376 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.472386 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.575181 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.575236 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.575247 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.575265 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.575276 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.677577 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.677622 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.677635 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.677652 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.677686 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.780791 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.780851 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.780871 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.780897 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.780916 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.884277 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.884327 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.884339 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.884357 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.884370 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.895197 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.895225 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.895334 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:16:57 crc kubenswrapper[4678]: E1124 11:16:57.895385 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.987661 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.987717 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.987726 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.987741 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:57 crc kubenswrapper[4678]: I1124 11:16:57.987754 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:57Z","lastTransitionTime":"2025-11-24T11:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.090442 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.090483 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.090496 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.090515 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.090529 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.100125 4678 generic.go:334] "Generic (PLEG): container finished" podID="6fdaea25-35e1-4a8b-aabd-ec50fb9af003" containerID="edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b" exitCode=0 Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.100175 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerDied","Data":"edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.122849 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.141181 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.158501 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.172585 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.185048 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.198604 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.198644 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.198656 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.198700 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.198715 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.200780 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.217862 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.235773 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.249526 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.262095 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.284992 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.298274 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.301153 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.301185 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.301197 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.301217 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.301230 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.316983 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.329388 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.403976 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.404011 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.404022 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 
11:16:58.404038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.404049 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.506840 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.506895 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.506910 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.506931 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.506944 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.610450 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.610504 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.610516 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.610537 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.610553 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.714074 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.714145 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.714164 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.714193 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.714215 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.818610 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.818728 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.818750 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.818778 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.818797 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.830857 4678 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.895225 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:16:58 crc kubenswrapper[4678]: E1124 11:16:58.895422 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.921501 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.921554 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.921572 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.921599 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:58 crc kubenswrapper[4678]: I1124 11:16:58.921619 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:58Z","lastTransitionTime":"2025-11-24T11:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.024834 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.024894 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.024908 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.024932 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.024949 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.111247 4678 generic.go:334] "Generic (PLEG): container finished" podID="6fdaea25-35e1-4a8b-aabd-ec50fb9af003" containerID="fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd" exitCode=0 Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.111374 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerDied","Data":"fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.119797 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.120309 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.120366 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.128714 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.128774 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.128795 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.128825 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 
crc kubenswrapper[4678]: I1124 11:16:59.128847 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.139735 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.165430 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.167810 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.169429 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.179869 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.195048 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.213845 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.228145 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.232184 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.232228 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.232238 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.232255 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.232269 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.248072 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.270376 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.292507 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.319620 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.334721 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.334770 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.334781 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.334799 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.334815 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.356971 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.369406 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.389289 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.404445 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.420550 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.431711 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.437241 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.437272 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.437286 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc 
kubenswrapper[4678]: I1124 11:16:59.437303 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.437315 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.446506 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.463333 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.481645 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.502652 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.517074 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.527999 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.540707 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.540759 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.540771 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.540790 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.540801 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.543386 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.556014 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.573299 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.591749 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.605407 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.618720 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.643898 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.643940 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.643950 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.643965 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.643977 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.747568 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.747632 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.747644 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.747660 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.747686 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.851203 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.851308 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.851325 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.851353 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.851373 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.895109 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:16:59 crc kubenswrapper[4678]: E1124 11:16:59.895370 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.895486 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:16:59 crc kubenswrapper[4678]: E1124 11:16:59.895736 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.918881 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.937500 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.955305 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.955385 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.955416 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:16:59 crc 
kubenswrapper[4678]: I1124 11:16:59.955453 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.955476 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:16:59Z","lastTransitionTime":"2025-11-24T11:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.961193 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:16:59 crc kubenswrapper[4678]: I1124 11:16:59.988928 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:16:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.017892 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.039301 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.058076 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.058121 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.058140 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.058159 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.058175 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.061813 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.079841 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.108079 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.128784 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.130949 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" event={"ID":"6fdaea25-35e1-4a8b-aabd-ec50fb9af003","Type":"ContainerStarted","Data":"ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.131122 4678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.147152 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.161976 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.162053 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.162074 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.162102 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.162119 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.175694 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.201299 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.221535 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.241476 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.258119 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.265064 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.265128 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.265148 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.265176 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.265196 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.277286 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.297555 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.319951 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.345711 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.368712 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.368794 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.368814 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.368847 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.368869 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.370577 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.395163 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.418624 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.439536 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.460078 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.472018 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.472073 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.472091 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.472119 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.472136 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.495176 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.523029 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.535421 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.574822 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.574873 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.574883 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.574903 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 
11:17:00.574913 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.677191 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.677242 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.677253 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.677272 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.677284 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.780163 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.780210 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.780221 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.780238 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.780255 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.883734 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.883804 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.883823 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.883852 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.883870 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.895534 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:00 crc kubenswrapper[4678]: E1124 11:17:00.895784 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.987049 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.987142 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.987153 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.987174 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:00 crc kubenswrapper[4678]: I1124 11:17:00.987188 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:00Z","lastTransitionTime":"2025-11-24T11:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.090427 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.091125 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.091147 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.091362 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.091391 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.135163 4678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.194529 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.194584 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.194601 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.194625 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.194646 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.297477 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.297547 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.297614 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.297654 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.297709 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.401449 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.401533 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.401557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.401593 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.401620 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.505601 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.505720 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.505751 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.505788 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.505815 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.609186 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.609267 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.609291 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.609414 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.609434 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.713054 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.713135 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.713153 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.713184 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.713202 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.817062 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.817123 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.817142 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.817167 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.817182 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.895814 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.895649 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:01 crc kubenswrapper[4678]: E1124 11:17:01.896106 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:01 crc kubenswrapper[4678]: E1124 11:17:01.896311 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.921434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.921505 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.921527 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.921552 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:01 crc kubenswrapper[4678]: I1124 11:17:01.921569 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:01Z","lastTransitionTime":"2025-11-24T11:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.025460 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.025521 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.025541 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.025570 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.025592 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.129638 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.129756 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.129781 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.129814 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.129836 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.142803 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/0.log" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.147765 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73" exitCode=1 Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.147846 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.149371 4678 scope.go:117] "RemoveContainer" containerID="48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.172571 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.188555 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.208923 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.230057 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.232196 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.232239 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.232256 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc 
kubenswrapper[4678]: I1124 11:17:02.232277 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.232291 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.247633 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.261583 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.274742 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.286784 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.305983 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.324304 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.335874 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.335934 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.335950 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.335973 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.335988 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.340698 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.365729 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:01Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.154349 6007 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.154438 6007 
reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.154486 6007 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.154536 6007 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.157608 6007 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 11:17:01.157696 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:17:01.157719 6007 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:01.157727 6007 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:01.157771 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:01.157788 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:17:01.157813 6007 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:17:01.157830 6007 factory.go:656] Stopping watch factory\\\\nI1124 11:17:01.157854 6007 ovnkube.go:599] Stopped 
ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66
af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.382409 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.400875 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:02Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.439097 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.439405 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.439470 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.439545 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.439649 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.543749 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.543804 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.543812 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.543830 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.543843 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.647694 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.647745 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.647756 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.647778 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.647791 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.750922 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.750971 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.750982 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.750998 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.751016 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.855818 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.855866 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.855875 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.855892 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.855902 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.894949 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:02 crc kubenswrapper[4678]: E1124 11:17:02.895168 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.959333 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.959620 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.959725 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.959818 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:02 crc kubenswrapper[4678]: I1124 11:17:02.959898 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:02Z","lastTransitionTime":"2025-11-24T11:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.063354 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.063440 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.063460 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.063492 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.063510 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.153963 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/1.log" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.155009 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/0.log" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.159274 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.160359 4678 scope.go:117] "RemoveContainer" containerID="fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535" Nov 24 11:17:03 crc kubenswrapper[4678]: E1124 11:17:03.160612 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.166494 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.166531 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.166542 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc 
kubenswrapper[4678]: I1124 11:17:03.166562 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.166576 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.183336 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.202897 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.218623 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.231402 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.242613 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.255902 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.269137 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.269834 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.269906 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.269928 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.269958 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.269981 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.286393 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.302792 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.322453 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.337111 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.352695 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.373006 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.373048 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.373058 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.373075 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.373085 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.377655 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:01Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.154349 6007 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.154438 6007 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.154486 
6007 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.154536 6007 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 11:17:01.157608 6007 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 11:17:01.157696 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:17:01.157719 6007 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:01.157727 6007 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:01.157771 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:01.157788 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 11:17:01.157813 6007 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 11:17:01.157830 6007 factory.go:656] Stopping watch factory\\\\nI1124 11:17:01.157854 6007 ovnkube.go:599] Stopped ovnkube\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.391366 4678 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.476459 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.476534 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.476552 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.476580 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.476602 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.579740 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.579814 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.579835 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.579867 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.579887 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.682874 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.682932 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.682943 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.682965 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.682977 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.786072 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.786144 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.786169 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.786199 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.786223 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.889202 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.889254 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.889265 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.889282 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.889293 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.895643 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.895643 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:03 crc kubenswrapper[4678]: E1124 11:17:03.895822 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:03 crc kubenswrapper[4678]: E1124 11:17:03.895876 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.992276 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.992323 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.992335 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.992354 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:03 crc kubenswrapper[4678]: I1124 11:17:03.992365 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:03Z","lastTransitionTime":"2025-11-24T11:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.095560 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.095620 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.095630 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.095703 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.095720 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.165811 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/1.log" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.166544 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/0.log" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.169490 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535" exitCode=1 Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.169546 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.169601 4678 scope.go:117] "RemoveContainer" containerID="48cd972a6e2509ab61555292531353785c7fa639e8ebe15b2ca75fd5e0072f73" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.171196 4678 scope.go:117] "RemoveContainer" containerID="fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535" Nov 24 11:17:04 crc kubenswrapper[4678]: E1124 11:17:04.171797 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.190053 4678 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.198300 4678 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.198347 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.198362 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.198381 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.198396 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.209615 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.225636 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.245072 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.268530 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.287235 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.301662 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.301753 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.301770 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.301798 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.301818 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.308505 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.327272 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.348297 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.365318 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.385471 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.405783 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.407873 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.407957 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.407980 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.408022 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.408063 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.435429 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.451070 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.511511 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.511628 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.511659 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.511739 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.511769 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.615389 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.615467 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.615487 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.615513 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.615534 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.715276 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc"] Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.715794 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.719839 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.720220 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.720295 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.720309 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.720330 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.720345 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.721007 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.736405 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.760028 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.779300 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.798903 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.823386 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.824560 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.824648 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.824704 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.824734 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.824757 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.841736 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.843098 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.843206 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: 
I1124 11:17:04.843235 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7l9c\" (UniqueName: \"kubernetes.io/projected/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-kube-api-access-w7l9c\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.843297 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.868065 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8
d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.894737 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.894658 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: E1124 11:17:04.894981 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.921344 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.927826 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.927889 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.927906 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.927934 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.927953 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:04Z","lastTransitionTime":"2025-11-24T11:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.940314 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.944161 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.944326 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.944385 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.944442 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7l9c\" (UniqueName: \"kubernetes.io/projected/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-kube-api-access-w7l9c\") pod 
\"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.945504 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.945554 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.954274 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.964655 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:04 crc kubenswrapper[4678]: I1124 11:17:04.975646 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7l9c\" (UniqueName: \"kubernetes.io/projected/6b64ed0b-8ce8-48ee-bcb6-551fc853626a-kube-api-access-w7l9c\") pod \"ovnkube-control-plane-749d76644c-zdtgc\" (UID: \"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.001155 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:04Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.016738 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.032211 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.032384 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.032457 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.032487 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.032508 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.033248 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.034443 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.052352 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: W1124 11:17:05.054230 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b64ed0b_8ce8_48ee_bcb6_551fc853626a.slice/crio-f6e1cf23c2143bbfb94b9f5aafba74cdfd32cc823f7541d305496e3a873f630c WatchSource:0}: Error finding container f6e1cf23c2143bbfb94b9f5aafba74cdfd32cc823f7541d305496e3a873f630c: Status 404 returned error can't find the container with id f6e1cf23c2143bbfb94b9f5aafba74cdfd32cc823f7541d305496e3a873f630c Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.135261 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.135323 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.135336 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.135359 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.135378 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.183713 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/1.log" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.188932 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" event={"ID":"6b64ed0b-8ce8-48ee-bcb6-551fc853626a","Type":"ContainerStarted","Data":"f6e1cf23c2143bbfb94b9f5aafba74cdfd32cc823f7541d305496e3a873f630c"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.238767 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.238828 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.238842 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.238866 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.238881 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.247772 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248043 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:17:21.2479889 +0000 UTC m=+52.179048579 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.248172 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.248247 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.248322 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.248417 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248656 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248697 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248710 4678 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 
11:17:05.248720 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248763 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248790 4678 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248769 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:21.248750802 +0000 UTC m=+52.179810441 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248888 4678 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248903 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:21.248877915 +0000 UTC m=+52.179937734 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248926 4678 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.248984 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:17:21.248957848 +0000 UTC m=+52.180017497 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.249029 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:21.248995339 +0000 UTC m=+52.180054988 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.341342 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.341383 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.341393 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.341409 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.341420 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.444802 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.444846 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.444855 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.444870 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.444880 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.548464 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.548515 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.548529 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.548551 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.548566 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.651509 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.651566 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.651579 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.651600 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.651612 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.754447 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.754492 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.754502 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.754518 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.754529 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.858506 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.858565 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.858575 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.858596 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.858612 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.863611 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pg6bk"] Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.864235 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.864331 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.884156 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.895478 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.895549 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.895651 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:05 crc kubenswrapper[4678]: E1124 11:17:05.895817 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.904928 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.917426 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc 
kubenswrapper[4678]: I1124 11:17:05.934498 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.952382 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.957347 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjxrq\" (UniqueName: \"kubernetes.io/projected/dca80848-6c0a-4946-980a-197e2ecfc898-kube-api-access-zjxrq\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.957415 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.961270 4678 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.961330 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.961350 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.961379 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.961400 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:05Z","lastTransitionTime":"2025-11-24T11:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.965977 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:05 crc kubenswrapper[4678]: I1124 11:17:05.982857 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:05Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.002333 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.017841 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.030090 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.047335 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.058929 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjxrq\" (UniqueName: \"kubernetes.io/projected/dca80848-6c0a-4946-980a-197e2ecfc898-kube-api-access-zjxrq\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.059029 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.059356 4678 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:06 crc 
kubenswrapper[4678]: E1124 11:17:06.059450 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs podName:dca80848-6c0a-4946-980a-197e2ecfc898 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:06.559425138 +0000 UTC m=+37.490484767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs") pod "network-metrics-daemon-pg6bk" (UID: "dca80848-6c0a-4946-980a-197e2ecfc898") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.066035 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.066114 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.066133 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.066173 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.066195 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.072378 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca
23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.084772 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjxrq\" (UniqueName: \"kubernetes.io/projected/dca80848-6c0a-4946-980a-197e2ecfc898-kube-api-access-zjxrq\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.099711 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.122298 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.144608 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.160389 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.172005 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.172117 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.172146 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.172206 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.172235 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.193996 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" event={"ID":"6b64ed0b-8ce8-48ee-bcb6-551fc853626a","Type":"ContainerStarted","Data":"63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.194071 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" event={"ID":"6b64ed0b-8ce8-48ee-bcb6-551fc853626a","Type":"ContainerStarted","Data":"dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.214960 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.218721 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.218763 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.218776 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.218801 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.218816 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.231332 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc 
kubenswrapper[4678]: E1124 11:17:06.234626 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.239622 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.239661 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.239691 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.239712 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.239726 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.251005 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.253987 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.259493 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.259557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.259574 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.259599 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.259617 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.272072 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.276597 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.281449 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.281489 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.281499 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.281519 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.281534 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.291477 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.301269 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.306229 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.306204 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.306348 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.306376 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.306409 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.306434 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.321268 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.325823 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.326102 4678 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.328349 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.328419 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.328448 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.328479 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.328502 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.334075 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.345415 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.364590 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.381780 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.400138 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.417620 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.431785 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.431826 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.431839 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.431857 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.431868 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.446417 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.505477 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.529859 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:06Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.535802 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.535845 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.535859 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.535878 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.535892 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.563716 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.564000 4678 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.564187 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs podName:dca80848-6c0a-4946-980a-197e2ecfc898 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:07.564138021 +0000 UTC m=+38.495197710 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs") pod "network-metrics-daemon-pg6bk" (UID: "dca80848-6c0a-4946-980a-197e2ecfc898") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.639313 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.639386 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.639404 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.639433 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.639453 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.743724 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.743813 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.743836 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.743864 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.743885 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.846879 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.846956 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.846975 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.847000 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.847019 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.894967 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:06 crc kubenswrapper[4678]: E1124 11:17:06.895177 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.951261 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.951321 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.951340 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.951365 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:06 crc kubenswrapper[4678]: I1124 11:17:06.951382 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:06Z","lastTransitionTime":"2025-11-24T11:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.054744 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.054827 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.054849 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.054881 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.054902 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.158330 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.158404 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.158427 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.158463 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.158486 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.261535 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.261606 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.261625 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.261652 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.261695 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.364431 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.364485 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.364504 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.364526 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.364540 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.452491 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.453863 4678 scope.go:117] "RemoveContainer" containerID="fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535" Nov 24 11:17:07 crc kubenswrapper[4678]: E1124 11:17:07.454089 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.467884 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.467942 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.467958 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.467979 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.467992 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.570901 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.570960 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.570974 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.570995 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.571009 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.578392 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:07 crc kubenswrapper[4678]: E1124 11:17:07.578587 4678 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:07 crc kubenswrapper[4678]: E1124 11:17:07.578723 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs podName:dca80848-6c0a-4946-980a-197e2ecfc898 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:09.578648465 +0000 UTC m=+40.509708134 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs") pod "network-metrics-daemon-pg6bk" (UID: "dca80848-6c0a-4946-980a-197e2ecfc898") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.673882 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.673936 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.673945 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.673963 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.673976 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.777000 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.777055 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.777070 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.777088 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.777101 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.880065 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.880120 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.880133 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.880152 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.880166 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.895625 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.895718 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:07 crc kubenswrapper[4678]: E1124 11:17:07.895815 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:07 crc kubenswrapper[4678]: E1124 11:17:07.895902 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.896088 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:07 crc kubenswrapper[4678]: E1124 11:17:07.896209 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.983339 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.983395 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.983409 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.983434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:07 crc kubenswrapper[4678]: I1124 11:17:07.983451 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:07Z","lastTransitionTime":"2025-11-24T11:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.086426 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.086510 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.086521 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.086538 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.086560 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.193962 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.194048 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.194063 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.194087 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.194100 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.297455 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.297520 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.297531 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.297552 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.297564 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.401147 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.401202 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.401212 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.401230 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.401242 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.504795 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.504887 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.504912 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.504942 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.504962 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.609452 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.609560 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.609580 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.609637 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.609665 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.713479 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.713902 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.714105 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.714254 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.714406 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.818154 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.818253 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.818282 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.818323 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.818351 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.895359 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:08 crc kubenswrapper[4678]: E1124 11:17:08.895619 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.922591 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.922801 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.922825 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.922854 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:08 crc kubenswrapper[4678]: I1124 11:17:08.922876 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:08Z","lastTransitionTime":"2025-11-24T11:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.026713 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.027059 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.027149 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.027237 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.027345 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.131433 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.131506 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.131524 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.131552 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.131571 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.234711 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.234815 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.234834 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.234868 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.234903 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.338970 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.339048 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.339069 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.339099 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.339119 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.441981 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.442043 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.442062 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.442090 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.442116 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.445847 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.468507 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.492576 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.512381 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc 
kubenswrapper[4678]: I1124 11:17:09.530589 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc 
kubenswrapper[4678]: I1124 11:17:09.544931 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.545040 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.545064 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.545129 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.545148 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.554137 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.576305 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.599035 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.601437 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:09 crc kubenswrapper[4678]: E1124 11:17:09.601743 4678 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:09 crc kubenswrapper[4678]: E1124 11:17:09.601859 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs podName:dca80848-6c0a-4946-980a-197e2ecfc898 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:13.601834235 +0000 UTC m=+44.532894084 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs") pod "network-metrics-daemon-pg6bk" (UID: "dca80848-6c0a-4946-980a-197e2ecfc898") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.616372 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.635505 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.649442 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.649521 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.649543 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc 
kubenswrapper[4678]: I1124 11:17:09.649574 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.649595 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.652938 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.665281 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.693008 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.715209 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.745495 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.752742 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.752825 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.752851 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.752891 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.752914 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.761084 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.776817 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.855881 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc 
kubenswrapper[4678]: I1124 11:17:09.856328 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.856351 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.856376 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.856395 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.895295 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.895443 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:09 crc kubenswrapper[4678]: E1124 11:17:09.895757 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.895795 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:09 crc kubenswrapper[4678]: E1124 11:17:09.895935 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:09 crc kubenswrapper[4678]: E1124 11:17:09.896173 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.918106 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.939659 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.954269 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.959653 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.959758 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.959783 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.959815 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.959840 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:09Z","lastTransitionTime":"2025-11-24T11:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.973737 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:09 crc kubenswrapper[4678]: I1124 11:17:09.994689 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:09Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.018155 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.041472 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.062491 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.062528 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.062538 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.062553 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.062564 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.073031 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.087292 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.103901 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.126984 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.140846 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.157961 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.165535 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.165623 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.165635 4678 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.165653 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.165680 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.172520 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 
24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.187207 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.200541 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.268874 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.268928 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.268938 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.268957 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.268969 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.372735 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.372795 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.372809 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.372830 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.372846 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.476031 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.476105 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.476132 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.476165 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.476187 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.579228 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.579324 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.579343 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.579375 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.579394 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.682222 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.682266 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.682278 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.682296 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.682307 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.784982 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.785026 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.785035 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.785049 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.785061 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.887250 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.887293 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.887305 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.887327 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.887344 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.894529 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:10 crc kubenswrapper[4678]: E1124 11:17:10.894750 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.991135 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.991220 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.991240 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.991274 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:10 crc kubenswrapper[4678]: I1124 11:17:10.991294 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:10Z","lastTransitionTime":"2025-11-24T11:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.093812 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.093853 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.093863 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.093878 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.093906 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.196865 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.196962 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.196980 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.197005 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.197025 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.299858 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.299978 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.299998 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.300022 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.300040 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.402551 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.402595 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.402604 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.402620 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.402630 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.505825 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.505894 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.505914 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.505942 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.505961 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.609847 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.609918 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.609938 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.609969 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.609988 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.712493 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.712571 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.712580 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.712598 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.712609 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.815654 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.815735 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.815773 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.815794 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.815807 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.895108 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.895153 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.895256 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:11 crc kubenswrapper[4678]: E1124 11:17:11.895447 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:11 crc kubenswrapper[4678]: E1124 11:17:11.895631 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:11 crc kubenswrapper[4678]: E1124 11:17:11.895880 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.919642 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.919822 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.919849 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.919882 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:11 crc kubenswrapper[4678]: I1124 11:17:11.919907 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:11Z","lastTransitionTime":"2025-11-24T11:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.023759 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.023832 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.023852 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.023881 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.023901 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.127480 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.127574 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.127595 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.127624 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.127644 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.230457 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.230510 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.230519 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.230547 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.230558 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.333852 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.333931 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.333956 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.333987 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.334019 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.436901 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.437004 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.437025 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.437060 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.437086 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.540770 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.540863 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.540892 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.540932 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.540956 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.644659 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.644757 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.644777 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.644804 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.644822 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.748829 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.748927 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.748954 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.748989 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.749019 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.851846 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.851893 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.851904 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.851923 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.851936 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.895290 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:12 crc kubenswrapper[4678]: E1124 11:17:12.895504 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.955250 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.955311 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.955330 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.955355 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:12 crc kubenswrapper[4678]: I1124 11:17:12.955378 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:12Z","lastTransitionTime":"2025-11-24T11:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.059032 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.059272 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.059359 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.059432 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.059509 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.165511 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.165625 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.165649 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.165760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.165819 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.269546 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.269625 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.269645 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.269720 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.269747 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.373006 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.373070 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.373085 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.373111 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.373126 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.476313 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.476369 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.476381 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.476400 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.476414 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.580092 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.580148 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.580159 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.580178 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.580191 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.650325 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:13 crc kubenswrapper[4678]: E1124 11:17:13.650483 4678 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:13 crc kubenswrapper[4678]: E1124 11:17:13.650550 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs podName:dca80848-6c0a-4946-980a-197e2ecfc898 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:21.650531952 +0000 UTC m=+52.581591591 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs") pod "network-metrics-daemon-pg6bk" (UID: "dca80848-6c0a-4946-980a-197e2ecfc898") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.683101 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.683459 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.683534 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.683646 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.683749 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.787229 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.787300 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.787318 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.787346 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.787365 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.890980 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.891046 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.891055 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.891072 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.891082 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.895632 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.895632 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:13 crc kubenswrapper[4678]: E1124 11:17:13.895839 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:13 crc kubenswrapper[4678]: E1124 11:17:13.895909 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.895648 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:13 crc kubenswrapper[4678]: E1124 11:17:13.895976 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.993899 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.993980 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.993998 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.994027 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:13 crc kubenswrapper[4678]: I1124 11:17:13.994048 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:13Z","lastTransitionTime":"2025-11-24T11:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.097434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.097537 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.097557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.097584 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.097606 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.200731 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.200844 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.200862 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.200888 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.200911 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.304512 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.304562 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.304575 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.304594 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.304606 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.408446 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.408518 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.408541 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.408571 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.408591 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.512086 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.512175 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.512188 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.512229 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.512249 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.615901 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.615950 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.615960 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.615982 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.615993 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.718963 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.719038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.719066 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.719103 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.719130 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.823117 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.823186 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.823211 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.823243 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.823267 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.895159 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:14 crc kubenswrapper[4678]: E1124 11:17:14.895349 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.925548 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.925608 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.925634 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.925708 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:14 crc kubenswrapper[4678]: I1124 11:17:14.925732 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:14Z","lastTransitionTime":"2025-11-24T11:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.029189 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.029254 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.029269 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.029289 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.029306 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.132480 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.132532 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.132548 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.132596 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.132614 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.235480 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.235543 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.235562 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.235592 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.235619 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.338752 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.338790 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.338800 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.338815 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.338824 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.441841 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.441889 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.441898 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.441917 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.441929 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.544741 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.544782 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.544790 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.544805 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.544815 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.647467 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.647529 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.647540 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.647557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.647569 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.750315 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.750363 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.750374 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.750392 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.750405 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.852961 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.853004 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.853013 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.853028 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.853038 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.894724 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.894818 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:15 crc kubenswrapper[4678]: E1124 11:17:15.894863 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.894870 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:15 crc kubenswrapper[4678]: E1124 11:17:15.894972 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:15 crc kubenswrapper[4678]: E1124 11:17:15.895079 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.956244 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.956292 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.956301 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.956318 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:15 crc kubenswrapper[4678]: I1124 11:17:15.956330 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:15Z","lastTransitionTime":"2025-11-24T11:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.059431 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.059483 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.059495 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.059517 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.059535 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.162507 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.162552 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.162562 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.162581 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.162592 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.265789 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.265835 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.265844 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.265859 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.265871 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.368761 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.368816 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.368829 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.368850 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.368861 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.438271 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.438333 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.438349 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.438372 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.438389 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: E1124 11:17:16.450500 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.454894 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.454924 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.454933 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.454949 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.454960 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: E1124 11:17:16.469034 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.473932 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.473996 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.474016 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.474040 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.474059 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: E1124 11:17:16.488051 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.492281 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.492362 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.492379 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.492405 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.492424 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: E1124 11:17:16.507246 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.511324 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.511370 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.511385 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.511403 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.511416 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: E1124 11:17:16.525632 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:16 crc kubenswrapper[4678]: E1124 11:17:16.525806 4678 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.527767 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.527823 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.527841 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.527867 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.527885 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.631038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.631092 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.631101 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.631118 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.631130 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.734074 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.734138 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.734152 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.734174 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.734196 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.837221 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.837546 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.837622 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.837723 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.837803 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.894744 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:16 crc kubenswrapper[4678]: E1124 11:17:16.894913 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.942010 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.942080 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.942098 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.942168 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:16 crc kubenswrapper[4678]: I1124 11:17:16.942191 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:16Z","lastTransitionTime":"2025-11-24T11:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.045407 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.045473 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.045487 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.045510 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.045524 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.148001 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.148052 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.148098 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.148117 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.148129 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.250784 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.250846 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.250857 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.250876 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.250888 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.353898 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.353947 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.353962 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.353981 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.353994 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.457341 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.457405 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.457418 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.457439 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.457453 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.560470 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.560536 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.560558 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.560592 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.560614 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.663905 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.664088 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.664114 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.664140 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.664158 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.767058 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.767109 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.767122 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.767148 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.767165 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.870564 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.870625 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.870640 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.870658 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.870694 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.895020 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:17 crc kubenswrapper[4678]: E1124 11:17:17.895282 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.896038 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:17 crc kubenswrapper[4678]: E1124 11:17:17.896176 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.896050 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:17 crc kubenswrapper[4678]: E1124 11:17:17.896270 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.973371 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.973436 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.973452 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.973469 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:17 crc kubenswrapper[4678]: I1124 11:17:17.973481 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:17Z","lastTransitionTime":"2025-11-24T11:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.076199 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.076244 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.076256 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.076273 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.076285 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.179229 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.179297 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.179317 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.179342 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.179363 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.282783 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.282857 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.282881 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.282915 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.282940 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.386050 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.386121 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.386139 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.386166 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.386188 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.489651 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.489764 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.489785 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.489812 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.489831 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.593046 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.593106 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.593142 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.593173 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.593197 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.696519 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.696564 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.696574 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.696591 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.696602 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.799470 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.799563 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.799586 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.799620 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.799639 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.894991 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:18 crc kubenswrapper[4678]: E1124 11:17:18.895140 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.902568 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.902615 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.902627 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.902642 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:18 crc kubenswrapper[4678]: I1124 11:17:18.902657 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:18Z","lastTransitionTime":"2025-11-24T11:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.006562 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.006623 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.006637 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.006657 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.006692 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.110054 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.110125 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.110145 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.110170 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.110190 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.214191 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.214251 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.214269 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.214296 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.214315 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.317377 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.317428 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.317444 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.317465 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.317478 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.420624 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.420760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.420778 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.420813 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.420834 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.524351 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.524430 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.524449 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.524478 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.524498 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.627568 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.627729 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.627755 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.627786 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.627809 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.730870 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.730947 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.730968 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.730996 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.731019 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.834008 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.834082 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.834094 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.834114 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.834127 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.895086 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.895148 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.895085 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:19 crc kubenswrapper[4678]: E1124 11:17:19.895428 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:19 crc kubenswrapper[4678]: E1124 11:17:19.896325 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:19 crc kubenswrapper[4678]: E1124 11:17:19.896443 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.896917 4678 scope.go:117] "RemoveContainer" containerID="fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.913650 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.928376 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:19 crc 
kubenswrapper[4678]: I1124 11:17:19.936852 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.936898 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.936910 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.936932 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.936950 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:19Z","lastTransitionTime":"2025-11-24T11:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.952542 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.976180 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:19 crc kubenswrapper[4678]: I1124 11:17:19.999136 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.017146 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.036237 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.039466 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.039532 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.039547 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.039573 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.039589 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.056608 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.073305 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.092299 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.126570 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.142661 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.142749 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.142767 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.142793 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.142820 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.155659 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountP
ath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.175559 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.196497 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.207276 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.223534 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.245479 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.245564 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.245587 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.245616 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.245634 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.254350 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/1.log" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.256772 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.257281 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.277589 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767
c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.294575 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f
416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.309211 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.323926 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.337811 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.348799 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.348865 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.348882 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc 
kubenswrapper[4678]: I1124 11:17:20.348909 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.348926 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.350817 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.364725 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc 
kubenswrapper[4678]: I1124 11:17:20.379332 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc
1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.391794 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.412352 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.432393 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.452045 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.452090 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.452102 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.452122 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.452136 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.454842 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.470143 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.488572 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.515346 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 
11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.531282 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.555511 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.555558 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.555567 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.555584 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.555595 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.658365 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.658440 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.658451 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.658467 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.658490 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.761826 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.761880 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.761890 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.761908 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.761920 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.887289 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.887423 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.887465 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.887503 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.887526 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.895260 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:20 crc kubenswrapper[4678]: E1124 11:17:20.895508 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.990942 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.991008 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.991021 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.991043 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:20 crc kubenswrapper[4678]: I1124 11:17:20.991059 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:20Z","lastTransitionTime":"2025-11-24T11:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.093881 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.093948 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.093969 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.093997 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.094020 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.197390 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.197460 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.197486 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.197522 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.197546 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.264002 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/2.log" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.265213 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/1.log" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.270553 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9" exitCode=1 Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.270625 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.270749 4678 scope.go:117] "RemoveContainer" containerID="fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.272497 4678 scope.go:117] "RemoveContainer" containerID="5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9" Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.272845 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.295773 4678 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.301477 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.301528 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.301547 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.301571 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.301591 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.312442 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.338859 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8
d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.340168 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.340393 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:17:53.340340182 +0000 UTC m=+84.271399871 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.340496 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.340654 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.340759 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.340746 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.340845 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.340828 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.340999 4678 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.341086 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:53.341063332 +0000 UTC m=+84.272123001 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.341581 4678 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.341860 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:53.341837664 +0000 UTC m=+84.272897333 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.341907 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.342190 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.342336 4678 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.342404 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:53.34238844 +0000 UTC m=+84.273448109 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.341750 4678 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.342485 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:53.342472442 +0000 UTC m=+84.273532111 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.360979 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d4838
20f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.380999 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.398257 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.405786 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.406125 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.406267 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.406436 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.406584 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.413520 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.444084 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fec95afac95f841a657b669ad792d57fb6cf0f3851777e1bbc03fb05f400d535\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:03Z\\\",\\\"message\\\":\\\"te Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1124 11:17:03.125174 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:03Z is after 2025-08-24T17:21:41Z]\\\\nI1124 11:17:03.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 
2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af0496
0335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.459966 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.479932 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.496121 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.509915 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.509960 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.509976 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.510005 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.510024 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.511507 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.534821 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.554017 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.573862 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.590759 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:21 crc 
kubenswrapper[4678]: I1124 11:17:21.613556 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.613653 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.613710 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.613740 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.613758 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.717255 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.717323 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.717341 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.717366 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.717385 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.745429 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.746149 4678 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.746466 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs podName:dca80848-6c0a-4946-980a-197e2ecfc898 nodeName:}" failed. No retries permitted until 2025-11-24 11:17:37.746412641 +0000 UTC m=+68.677472340 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs") pod "network-metrics-daemon-pg6bk" (UID: "dca80848-6c0a-4946-980a-197e2ecfc898") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.820818 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.821969 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.822013 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.822049 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.822072 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.895023 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.895084 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.895139 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.895218 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.895375 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:21 crc kubenswrapper[4678]: E1124 11:17:21.895520 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.924961 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.925018 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.925030 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.925052 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:21 crc kubenswrapper[4678]: I1124 11:17:21.925068 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:21Z","lastTransitionTime":"2025-11-24T11:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.029426 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.029510 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.029535 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.029568 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.029594 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.133391 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.133447 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.133462 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.133481 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.133495 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.237100 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.237167 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.237184 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.237206 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.237222 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.276513 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/2.log" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.282029 4678 scope.go:117] "RemoveContainer" containerID="5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9" Nov 24 11:17:22 crc kubenswrapper[4678]: E1124 11:17:22.284436 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.311334 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.330724 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.339971 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.340027 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.340044 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.340070 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.340088 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.349951 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.366470 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.384435 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.406009 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.427598 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc 
kubenswrapper[4678]: I1124 11:17:22.443080 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.443158 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.443176 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.443204 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.443224 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.454054 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca
23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.480037 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.503220 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.546405 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.546923 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.547097 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.547246 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.547355 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.557563 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.584899 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.602822 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.621100 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.645440 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.650512 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.650646 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.650723 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.650815 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.650879 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.660330 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.753740 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.753798 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.753810 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.753825 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.753837 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.857130 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.858130 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.858198 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.858233 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.858260 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.894882 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:22 crc kubenswrapper[4678]: E1124 11:17:22.895072 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.961902 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.961992 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.962003 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.962023 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:22 crc kubenswrapper[4678]: I1124 11:17:22.962035 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:22Z","lastTransitionTime":"2025-11-24T11:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.065606 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.065687 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.065704 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.065722 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.065757 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.168764 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.168841 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.168859 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.168889 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.168908 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.271939 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.272271 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.272382 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.272484 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.272551 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.375262 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.375325 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.375344 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.375369 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.375387 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.477975 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.478037 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.478054 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.478080 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.478099 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.581203 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.581263 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.581277 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.581296 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.581308 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.684655 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.684719 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.684730 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.684746 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.684757 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.787280 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.787366 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.787391 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.787424 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.787443 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.890579 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.890698 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.890716 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.890738 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.890752 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.896027 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.896056 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.896091 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:23 crc kubenswrapper[4678]: E1124 11:17:23.896146 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:23 crc kubenswrapper[4678]: E1124 11:17:23.896433 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:23 crc kubenswrapper[4678]: E1124 11:17:23.896523 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.994125 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.994202 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.994219 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.994247 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:23 crc kubenswrapper[4678]: I1124 11:17:23.994265 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:23Z","lastTransitionTime":"2025-11-24T11:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.097445 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.097557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.097579 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.097616 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.097640 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.201020 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.201106 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.201131 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.201169 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.201195 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.304052 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.304115 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.304134 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.304161 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.304178 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.407909 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.407986 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.408011 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.408038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.408060 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.511982 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.512092 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.512115 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.512143 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.512160 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.614892 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.614941 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.614952 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.614971 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.614983 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.718123 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.718210 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.718228 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.718257 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.718276 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.820947 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.821044 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.821067 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.821106 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.821131 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.895523 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:24 crc kubenswrapper[4678]: E1124 11:17:24.895773 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.924833 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.924922 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.924953 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.924984 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:24 crc kubenswrapper[4678]: I1124 11:17:24.925004 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:24Z","lastTransitionTime":"2025-11-24T11:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.028664 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.028816 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.028835 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.028863 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.028881 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.132653 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.132770 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.132796 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.132837 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.132879 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.236793 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.236851 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.236870 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.236896 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.236910 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.340739 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.341260 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.341428 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.341597 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.341804 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.445642 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.445748 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.445766 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.445792 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.445814 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.549197 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.549747 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.549946 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.550158 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.550409 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.653816 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.653875 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.653896 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.653924 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.653941 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.757836 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.757911 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.757957 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.757990 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.758014 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.861053 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.861348 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.861450 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.861613 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.861728 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.895056 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.895334 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.895126 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:25 crc kubenswrapper[4678]: E1124 11:17:25.895523 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:25 crc kubenswrapper[4678]: E1124 11:17:25.895575 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:25 crc kubenswrapper[4678]: E1124 11:17:25.895748 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.965341 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.965472 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.965588 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.965715 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:25 crc kubenswrapper[4678]: I1124 11:17:25.965770 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:25Z","lastTransitionTime":"2025-11-24T11:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.070817 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.070876 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.070897 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.070924 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.070945 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.173550 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.173592 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.173604 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.173623 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.173637 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.276760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.276828 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.276853 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.276882 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.276900 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.380404 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.380461 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.380473 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.380492 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.380505 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.484549 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.484601 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.484611 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.484629 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.484644 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.587884 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.587929 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.587939 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.587957 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.587970 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.691302 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.691368 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.691386 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.691409 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.691429 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.719944 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.720015 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.720032 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.720056 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.720072 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: E1124 11:17:26.747298 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.754306 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.754379 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.754403 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.754436 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.754461 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: E1124 11:17:26.778802 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.786285 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.786420 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.786479 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.786511 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.786570 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: E1124 11:17:26.811983 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.819603 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.819709 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.819734 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.819770 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.819791 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: E1124 11:17:26.844020 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.850706 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.850772 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.850787 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.850805 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.850818 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: E1124 11:17:26.871996 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:26 crc kubenswrapper[4678]: E1124 11:17:26.872188 4678 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.874510 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.874556 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.874569 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.874591 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.874605 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.894863 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:26 crc kubenswrapper[4678]: E1124 11:17:26.895035 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.977909 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.977976 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.977989 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.978013 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.978032 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:26Z","lastTransitionTime":"2025-11-24T11:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:26 crc kubenswrapper[4678]: I1124 11:17:26.992029 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.008292 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.014434 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc 
kubenswrapper[4678]: I1124 11:17:27.034770 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.056222 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.076222 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.081752 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.081807 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.081823 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc 
kubenswrapper[4678]: I1124 11:17:27.081847 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.081899 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.097082 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.117210 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.137581 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.155366 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.177740 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.184746 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.184830 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.184856 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.184888 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.184914 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.200059 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.225379 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.244707 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.293648 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.293796 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.293862 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.293918 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.293945 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.294222 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.314081 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.327835 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.342303 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.397371 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.397757 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.397953 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.398079 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.398187 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.501556 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.502125 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.502293 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.502428 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.502557 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.606335 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.606806 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.607035 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.607272 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.607437 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.711188 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.711261 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.711283 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.711312 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.711338 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.814855 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.814932 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.814961 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.814992 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.815017 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.895434 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.895503 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.895561 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:27 crc kubenswrapper[4678]: E1124 11:17:27.895738 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:27 crc kubenswrapper[4678]: E1124 11:17:27.895984 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:27 crc kubenswrapper[4678]: E1124 11:17:27.896146 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.918811 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.918865 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.918888 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.918913 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:27 crc kubenswrapper[4678]: I1124 11:17:27.918933 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:27Z","lastTransitionTime":"2025-11-24T11:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.022548 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.022612 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.022629 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.022657 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.022725 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.126286 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.126367 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.126380 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.126404 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.126420 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.230259 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.230333 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.230355 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.230386 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.230408 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.333369 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.333425 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.333434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.333451 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.333465 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.436223 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.436289 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.436310 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.436335 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.436355 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.539942 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.540009 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.540026 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.540050 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.540069 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.643256 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.643324 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.643344 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.643369 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.643390 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.746803 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.746900 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.746922 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.746951 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.746971 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.855010 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.855070 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.855087 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.855111 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.855128 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.895217 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:28 crc kubenswrapper[4678]: E1124 11:17:28.895390 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.958437 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.958491 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.958508 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.958531 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:28 crc kubenswrapper[4678]: I1124 11:17:28.958549 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:28Z","lastTransitionTime":"2025-11-24T11:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.062321 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.062420 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.062442 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.062474 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.062495 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.166525 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.166597 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.166610 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.166632 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.166649 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.270152 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.270241 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.270306 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.270340 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.270369 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.374391 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.374972 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.375110 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.375262 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.375631 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.480478 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.480551 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.480569 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.480601 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.480623 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.583983 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.584399 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.584531 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.584659 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.584898 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.689094 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.689139 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.689154 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.689173 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.689186 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.794381 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.794464 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.794486 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.794518 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.794548 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.894659 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.894659 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:29 crc kubenswrapper[4678]: E1124 11:17:29.895290 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.894828 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:29 crc kubenswrapper[4678]: E1124 11:17:29.895385 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:29 crc kubenswrapper[4678]: E1124 11:17:29.895614 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.897423 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.897485 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.897501 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.897528 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.897547 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:29Z","lastTransitionTime":"2025-11-24T11:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.922985 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.943136 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.962025 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:29 crc kubenswrapper[4678]: I1124 11:17:29.984489 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:29Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.000029 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.000079 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.000097 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.000120 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.000135 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.003502 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.023997 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.045875 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.069036 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.093049 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.103416 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.103481 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.103495 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.103517 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.103533 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.110113 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.128098 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.144026 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.157989 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc 
kubenswrapper[4678]: I1124 11:17:30.175632 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.193789 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.205735 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.206072 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.206168 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.206248 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.206314 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.209882 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.225630 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:30Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.308836 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.308872 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.308883 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.308897 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.308906 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.412007 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.412047 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.412058 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.412077 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.412089 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.515974 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.516042 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.516063 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.516088 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.516105 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.619919 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.620336 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.620521 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.620739 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.621143 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.725211 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.725271 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.725289 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.725317 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.725335 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.828407 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.828493 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.828515 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.828550 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.828570 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.895212 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:30 crc kubenswrapper[4678]: E1124 11:17:30.895658 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.931636 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.931749 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.931776 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.931807 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:30 crc kubenswrapper[4678]: I1124 11:17:30.931825 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:30Z","lastTransitionTime":"2025-11-24T11:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.035821 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.035893 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.035913 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.035939 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.035956 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.138822 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.138880 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.138900 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.138928 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.138945 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.242085 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.242155 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.242174 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.242204 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.242222 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.345620 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.345718 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.345740 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.345760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.345776 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.448738 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.448829 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.448848 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.448872 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.448890 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.551904 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.551954 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.551965 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.551988 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.552000 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.655095 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.655153 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.655165 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.655187 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.655207 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.759721 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.759857 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.759880 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.759906 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.759924 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.863428 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.863871 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.864000 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.864103 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.864227 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.895087 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.895153 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:31 crc kubenswrapper[4678]: E1124 11:17:31.895305 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:31 crc kubenswrapper[4678]: E1124 11:17:31.895517 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.895661 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:31 crc kubenswrapper[4678]: E1124 11:17:31.895999 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.973389 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.973440 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.973450 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.973468 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:31 crc kubenswrapper[4678]: I1124 11:17:31.973479 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:31Z","lastTransitionTime":"2025-11-24T11:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.075919 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.075998 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.076019 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.076051 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.076072 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.179189 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.179240 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.179255 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.179277 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.179289 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.282352 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.282444 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.282469 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.282499 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.282519 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.385063 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.385114 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.385127 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.385148 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.385164 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.488161 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.488223 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.488237 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.488263 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.488277 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.591606 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.591718 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.591748 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.591782 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.591808 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.695760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.695806 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.695815 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.695833 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.695846 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.799101 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.799141 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.799152 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.799169 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.799181 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.895694 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:17:32 crc kubenswrapper[4678]: E1124 11:17:32.895916 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.902096 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.902257 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.902389 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.902495 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:32 crc kubenswrapper[4678]: I1124 11:17:32.902568 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:32Z","lastTransitionTime":"2025-11-24T11:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.005838 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.005887 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.005906 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.005933 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.005952 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.109806 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.110269 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.110607 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.110781 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.110915 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.214489 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.214572 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.214597 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.214633 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.214662 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.318379 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.318432 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.318444 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.318467 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.318481 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.421658 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.422089 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.422295 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.422512 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.422748 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.526607 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.526714 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.526728 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.526755 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.526772 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.630089 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.630136 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.630149 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.630170 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.630184 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.733843 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.734291 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.734619 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.734833 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.734992 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.839092 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.839175 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.839203 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.839234 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.839258 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.894744 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.894804 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.894755 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk"
Nov 24 11:17:33 crc kubenswrapper[4678]: E1124 11:17:33.895032 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:17:33 crc kubenswrapper[4678]: E1124 11:17:33.894925 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:17:33 crc kubenswrapper[4678]: E1124 11:17:33.895422 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.943696 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.943760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.943774 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.943794 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:33 crc kubenswrapper[4678]: I1124 11:17:33.943810 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:33Z","lastTransitionTime":"2025-11-24T11:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.047198 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.047287 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.047305 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.047342 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.047363 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.150840 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.150923 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.150947 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.150983 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.151006 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.261957 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.262036 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.262050 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.262071 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.262090 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.365661 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.365775 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.365790 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.365813 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.365828 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.467824 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.467873 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.467890 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.467910 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.467926 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.571841 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.571929 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.571945 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.571983 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.571999 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.675300 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.675389 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.675437 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.675461 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.675478 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.779759 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.780291 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.780515 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.780706 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.780857 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.884065 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.884108 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.884118 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.884135 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.884145 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.894535 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:17:34 crc kubenswrapper[4678]: E1124 11:17:34.894804 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.987744 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.988290 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.988472 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.988908 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:34 crc kubenswrapper[4678]: I1124 11:17:34.989077 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:34Z","lastTransitionTime":"2025-11-24T11:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.092049 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.092085 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.092094 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.092115 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.092124 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.195100 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.195180 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.195207 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.195237 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.195261 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.297877 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.297933 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.297948 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.297971 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.297986 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.401156 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.401214 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.401231 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.401257 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.401274 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.505225 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.505287 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.505297 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.505316 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.505329 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.608061 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.608344 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.608406 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.608522 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.608600 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.711268 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.711331 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.711348 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.711366 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.711377 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.814313 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.814384 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.814399 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.814421 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.814435 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.895036 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.895092 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.895106 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:35 crc kubenswrapper[4678]: E1124 11:17:35.895263 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:35 crc kubenswrapper[4678]: E1124 11:17:35.895387 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:35 crc kubenswrapper[4678]: E1124 11:17:35.895485 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.896301 4678 scope.go:117] "RemoveContainer" containerID="5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9" Nov 24 11:17:35 crc kubenswrapper[4678]: E1124 11:17:35.896498 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.917095 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.917161 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.917180 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.917203 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:35 crc kubenswrapper[4678]: I1124 11:17:35.917217 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:35Z","lastTransitionTime":"2025-11-24T11:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.019586 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.019660 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.019702 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.019722 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.019734 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.122693 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.122756 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.122772 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.122800 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.122816 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.225470 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.225555 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.225566 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.225583 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.225594 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.328975 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.329039 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.329057 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.329079 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.329095 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.431696 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.431745 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.431753 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.431770 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.431783 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.534935 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.535021 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.535044 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.535078 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.535110 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.638055 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.638132 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.638157 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.638195 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.638217 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.741627 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.741703 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.741714 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.741733 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.741746 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.844504 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.844557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.844574 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.844598 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.844612 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.895139 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:36 crc kubenswrapper[4678]: E1124 11:17:36.895311 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.947903 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.947956 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.947968 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.947987 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:36 crc kubenswrapper[4678]: I1124 11:17:36.948000 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:36Z","lastTransitionTime":"2025-11-24T11:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.050792 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.051095 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.051296 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.051420 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.051494 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.153853 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.153886 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.153899 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.153917 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.153927 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.216718 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.217091 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.217210 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.217380 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.217475 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.235352 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.241323 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.241370 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.241387 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.241442 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.241460 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.264180 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.264228 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.264242 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.264265 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.264282 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.283165 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.283212 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.283237 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.283289 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.283303 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.298692 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.303732 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.303785 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.303797 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.303818 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.303830 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.323707 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.323995 4678 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.326349 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.326389 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.326400 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.326418 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.326432 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.428978 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.429022 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.429035 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.429055 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.429068 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.531897 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.531947 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.531963 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.531986 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.532002 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.635901 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.635958 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.635970 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.635992 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.636007 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.739059 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.739128 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.739177 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.739201 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.739215 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.836593 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.836807 4678 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.836932 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs podName:dca80848-6c0a-4946-980a-197e2ecfc898 nodeName:}" failed. No retries permitted until 2025-11-24 11:18:09.83690513 +0000 UTC m=+100.767964769 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs") pod "network-metrics-daemon-pg6bk" (UID: "dca80848-6c0a-4946-980a-197e2ecfc898") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.842915 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.843010 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.843038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.843073 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.843101 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.895285 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.895537 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.895906 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.896039 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.896777 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:37 crc kubenswrapper[4678]: E1124 11:17:37.896919 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.945925 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.946286 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.946393 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.946561 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:37 crc kubenswrapper[4678]: I1124 11:17:37.946648 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:37Z","lastTransitionTime":"2025-11-24T11:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.049239 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.049297 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.049307 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.049324 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.049335 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.152343 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.152700 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.152775 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.152861 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.152930 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.255705 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.256021 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.256105 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.256207 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.256296 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.359205 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.359266 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.359275 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.359310 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.359322 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.462898 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.462960 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.462970 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.462985 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.462996 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.566776 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.566828 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.566840 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.566858 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.566872 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.669850 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.669890 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.669898 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.669916 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.669926 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.772547 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.772596 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.772609 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.772630 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.772646 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.876173 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.876219 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.876228 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.876244 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.876257 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.895849 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:38 crc kubenswrapper[4678]: E1124 11:17:38.896094 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.979004 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.979047 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.979056 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.979076 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:38 crc kubenswrapper[4678]: I1124 11:17:38.979087 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:38Z","lastTransitionTime":"2025-11-24T11:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.082080 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.082152 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.082166 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.082186 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.082200 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.185412 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.185480 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.185497 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.185525 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.185543 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.288778 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.288839 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.288852 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.288874 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.288888 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.391902 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.391941 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.391953 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.391972 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.391985 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.494932 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.494994 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.495010 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.495035 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.495052 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.598352 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.598408 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.598419 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.598439 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.598451 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.701076 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.701129 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.701139 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.701159 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.701172 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.804090 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.804141 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.804153 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.804177 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.804196 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.895959 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.896047 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:39 crc kubenswrapper[4678]: E1124 11:17:39.896136 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:39 crc kubenswrapper[4678]: E1124 11:17:39.896291 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.896570 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:39 crc kubenswrapper[4678]: E1124 11:17:39.896925 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.907353 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.907402 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.907412 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.907430 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.907452 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:39Z","lastTransitionTime":"2025-11-24T11:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.911662 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.932141 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.945622 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.959501 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:39 crc kubenswrapper[4678]: I1124 11:17:39.972154 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:39 crc 
kubenswrapper[4678]: I1124 11:17:39.988963 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc
1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:39Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.007146 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.009330 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.009379 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 
11:17:40.009388 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.009405 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.009416 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.021610 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.039442 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.054366 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.070435 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.083164 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.094536 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.112569 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.112620 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.112630 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.112654 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.112690 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.115236 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.127311 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.144712 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.160710 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.216247 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.216300 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.216311 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.216331 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.216342 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.319134 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.319175 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.319184 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.319201 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.319212 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.353613 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/0.log" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.354217 4678 generic.go:334] "Generic (PLEG): container finished" podID="f159c812-75d9-4ad6-9e20-4d208ffe42fb" containerID="8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71" exitCode=1 Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.354418 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h24xv" event={"ID":"f159c812-75d9-4ad6-9e20-4d208ffe42fb","Type":"ContainerDied","Data":"8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.355090 4678 scope.go:117] "RemoveContainer" containerID="8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.374176 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.390837 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.402193 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.413034 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.423241 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.423305 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.423323 4678 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.423351 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.423371 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.426813 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 
24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.437844 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.452950 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.468932 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.483021 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.500327 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.512893 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.526412 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.526713 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.526736 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.526747 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.526766 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.526780 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.538816 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.559118 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.572913 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.589092 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:39Z\\\",\\\"message\\\":\\\"2025-11-24T11:16:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928\\\\n2025-11-24T11:16:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928 to /host/opt/cni/bin/\\\\n2025-11-24T11:16:54Z [verbose] multus-daemon started\\\\n2025-11-24T11:16:54Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:17:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.606169 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mou
ntPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.630013 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.630064 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.630074 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.630093 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.630110 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.732380 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.732424 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.732435 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.732452 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.732462 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.835778 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.835853 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.835874 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.835902 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.835923 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.895538 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:40 crc kubenswrapper[4678]: E1124 11:17:40.895726 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.939866 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.939969 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.939989 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.940017 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:40 crc kubenswrapper[4678]: I1124 11:17:40.940035 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:40Z","lastTransitionTime":"2025-11-24T11:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.043296 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.043641 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.043749 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.043828 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.043902 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.147025 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.147074 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.147083 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.147099 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.147111 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.249946 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.250030 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.250056 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.250095 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.250123 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.355963 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.356199 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.356265 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.356313 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.356360 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.365556 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/0.log" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.365636 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h24xv" event={"ID":"f159c812-75d9-4ad6-9e20-4d208ffe42fb","Type":"ContainerStarted","Data":"d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.381758 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.399736 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.416581 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.429849 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.443752 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.455468 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc 
kubenswrapper[4678]: I1124 11:17:41.463101 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.463369 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.463395 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.463420 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.463435 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.478113 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.498306 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.512105 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.532194 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.547203 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.561687 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.565581 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.565631 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.565640 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.565661 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.565697 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.577847 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.591503 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.614425 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.627745 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.642465 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:39Z\\\",\\\"message\\\":\\\"2025-11-24T11:16:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928\\\\n2025-11-24T11:16:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928 to /host/opt/cni/bin/\\\\n2025-11-24T11:16:54Z [verbose] multus-daemon started\\\\n2025-11-24T11:16:54Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:17:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:41Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.668985 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.669036 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.669049 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.669068 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.669086 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.772774 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.772833 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.772843 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.772858 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.772871 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.876310 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.876373 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.876390 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.876417 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.876435 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.894960 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.895281 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:41 crc kubenswrapper[4678]: E1124 11:17:41.895436 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.895693 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:41 crc kubenswrapper[4678]: E1124 11:17:41.895752 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:41 crc kubenswrapper[4678]: E1124 11:17:41.896218 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.983028 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.983089 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.983103 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.983126 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:41 crc kubenswrapper[4678]: I1124 11:17:41.983142 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:41Z","lastTransitionTime":"2025-11-24T11:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.086700 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.086747 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.086756 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.086772 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.086783 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.190518 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.191077 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.191293 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.191477 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.191636 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.294492 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.294545 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.294556 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.294575 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.294588 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.397542 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.397650 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.397696 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.397722 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.397742 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.501266 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.501338 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.501355 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.501381 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.501399 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.605280 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.605326 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.605340 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.605356 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.605366 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.709909 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.709987 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.710010 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.710038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.710071 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.813703 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.813805 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.813827 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.813855 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.813876 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.894901 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:42 crc kubenswrapper[4678]: E1124 11:17:42.895074 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.917038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.917074 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.917113 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.917130 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:42 crc kubenswrapper[4678]: I1124 11:17:42.917141 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:42Z","lastTransitionTime":"2025-11-24T11:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.021162 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.021239 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.021249 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.021272 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.021285 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.124977 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.125036 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.125046 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.125065 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.125080 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.228557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.228617 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.228632 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.228654 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.228687 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.332435 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.332507 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.332530 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.332563 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.332588 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.435616 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.435749 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.435770 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.435798 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.435823 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.539295 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.539362 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.539384 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.539413 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.539434 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.642868 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.642909 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.642919 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.642934 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.642945 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.745812 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.745855 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.745864 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.745879 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.745889 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.848774 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.848861 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.848884 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.848914 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.848936 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.894996 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.895068 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:43 crc kubenswrapper[4678]: E1124 11:17:43.895253 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.895270 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:43 crc kubenswrapper[4678]: E1124 11:17:43.895425 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:43 crc kubenswrapper[4678]: E1124 11:17:43.895627 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.953220 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.953294 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.953319 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.953358 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:43 crc kubenswrapper[4678]: I1124 11:17:43.953385 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:43Z","lastTransitionTime":"2025-11-24T11:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.056404 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.056478 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.056500 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.056536 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.056560 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.160347 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.160436 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.160462 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.160494 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.160516 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.264961 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.265026 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.265047 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.265077 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.265102 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.370115 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.370179 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.370198 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.370240 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.370262 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.474943 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.475130 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.475158 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.475186 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.475199 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.578348 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.578403 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.578416 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.578437 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.578454 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.681427 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.681506 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.681525 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.681555 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.681574 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.785485 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.785555 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.785579 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.785612 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.785635 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.889123 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.889223 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.889243 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.889267 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.889284 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.895406 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:44 crc kubenswrapper[4678]: E1124 11:17:44.895566 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.993076 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.993141 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.993204 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.993230 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:44 crc kubenswrapper[4678]: I1124 11:17:44.993246 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:44Z","lastTransitionTime":"2025-11-24T11:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.097594 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.097714 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.097738 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.097772 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.097798 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.201558 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.202139 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.202172 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.202197 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.202215 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.305875 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.306430 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.306714 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.306980 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.307182 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.410604 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.411195 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.411399 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.411557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.411788 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.516414 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.516500 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.516525 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.516559 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.516582 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.620648 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.620756 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.620776 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.620801 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.620824 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.724394 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.724462 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.724483 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.724511 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.724533 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.828385 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.828482 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.828500 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.828530 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.828547 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.894918 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.895026 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.894954 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:45 crc kubenswrapper[4678]: E1124 11:17:45.895151 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:45 crc kubenswrapper[4678]: E1124 11:17:45.895380 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:45 crc kubenswrapper[4678]: E1124 11:17:45.895482 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.932351 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.932451 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.932470 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.932528 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:45 crc kubenswrapper[4678]: I1124 11:17:45.932548 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:45Z","lastTransitionTime":"2025-11-24T11:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.036256 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.036355 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.036415 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.036446 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.036500 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.140222 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.140269 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.140280 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.140300 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.140312 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.242718 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.242819 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.242843 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.242873 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.242890 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.345203 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.345263 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.345274 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.345295 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.345309 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.448070 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.448151 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.448161 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.448182 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.448195 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.551354 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.551429 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.551446 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.551477 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.551494 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.654156 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.654216 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.654231 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.654259 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.654272 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.756482 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.756526 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.756540 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.756560 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.756571 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.859910 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.859994 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.860014 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.860042 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.860062 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.894606 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:46 crc kubenswrapper[4678]: E1124 11:17:46.894808 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.963372 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.963427 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.963437 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.963458 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:46 crc kubenswrapper[4678]: I1124 11:17:46.963473 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:46Z","lastTransitionTime":"2025-11-24T11:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.066786 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.066867 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.066895 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.066920 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.066938 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.170133 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.170205 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.170724 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.170758 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.170774 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.273786 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.273853 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.273871 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.273890 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.273904 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.377000 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.377060 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.377071 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.377093 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.377114 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.480344 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.480391 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.480400 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.480419 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.480430 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.583596 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.583654 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.583689 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.583709 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.583722 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.637604 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.637664 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.637690 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.637709 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.637722 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.655263 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.660226 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.660280 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.660299 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.660323 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.660342 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.684072 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.695658 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.696112 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.696264 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.696445 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.696626 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.718914 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.723766 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.724086 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.724243 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.724434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.724603 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.747224 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.754025 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.754081 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.754097 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.754113 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.754125 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.775194 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.776537 4678 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.779246 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.779636 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.779989 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.780143 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.780271 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.884167 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.884710 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.884886 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.885051 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.885189 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.894728 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.894864 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.894727 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.894950 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.894727 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:47 crc kubenswrapper[4678]: E1124 11:17:47.895370 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.988791 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.988862 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.988879 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.988909 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:47 crc kubenswrapper[4678]: I1124 11:17:47.988930 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:47Z","lastTransitionTime":"2025-11-24T11:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.093274 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.093363 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.093376 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.093418 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.093433 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.196835 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.196922 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.196935 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.196984 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.197006 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.301147 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.301200 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.301210 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.301230 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.301242 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.404486 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.404558 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.404576 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.404598 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.404614 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.508419 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.508491 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.508508 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.508534 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.508551 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.611327 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.611387 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.611405 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.611429 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.611445 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.714688 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.714747 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.714760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.714782 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.714794 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.818051 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.818143 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.818168 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.818201 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.818230 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.895502 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:48 crc kubenswrapper[4678]: E1124 11:17:48.895738 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.921859 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.921904 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.921914 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.921931 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:48 crc kubenswrapper[4678]: I1124 11:17:48.921942 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:48Z","lastTransitionTime":"2025-11-24T11:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.025211 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.025268 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.025285 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.025314 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.025332 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.127736 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.128144 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.128359 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.128546 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.128785 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.232560 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.233032 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.233236 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.233448 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.233645 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.337044 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.337089 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.337100 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.337114 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.337124 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.440174 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.440235 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.440257 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.440288 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.440313 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.543119 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.543151 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.543159 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.543174 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.543185 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.651421 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.651498 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.651517 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.651663 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.651719 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.755222 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.755280 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.755297 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.755321 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.755339 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.858752 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.858809 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.858829 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.858856 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.858874 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.896463 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.896615 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:49 crc kubenswrapper[4678]: E1124 11:17:49.896652 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.896719 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:49 crc kubenswrapper[4678]: E1124 11:17:49.896872 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:49 crc kubenswrapper[4678]: E1124 11:17:49.897412 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.897892 4678 scope.go:117] "RemoveContainer" containerID="5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.955626 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.967151 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.967210 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.967225 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.967245 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.967260 4678 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:49Z","lastTransitionTime":"2025-11-24T11:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:49 crc kubenswrapper[4678]: I1124 11:17:49.979784 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.002535 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8
d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:49Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.021913 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.038012 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.056648 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.070477 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.070556 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.070575 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.070605 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.070625 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.077601 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.095495 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.121637 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.137370 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.157383 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:39Z\\\",\\\"message\\\":\\\"2025-11-24T11:16:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928\\\\n2025-11-24T11:16:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928 to /host/opt/cni/bin/\\\\n2025-11-24T11:16:54Z [verbose] multus-daemon started\\\\n2025-11-24T11:16:54Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:17:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.174441 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.175399 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.175456 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.175473 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.175497 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.175514 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.195439 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.216575 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.237218 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.255595 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.266750 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc 
kubenswrapper[4678]: I1124 11:17:50.280958 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.281019 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.281038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.281062 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.281079 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.383542 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.383839 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.383933 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.384004 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.384063 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.402452 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/2.log" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.405166 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.406217 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.423042 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f3
7b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.443113 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8
d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.459202 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.473989 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.487068 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.487329 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.487392 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.487462 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.487570 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.490573 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.505208 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.522196 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.538438 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.560485 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 
2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.575208 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.589998 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.590063 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.590077 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.590096 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.590112 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.591493 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:39Z\\\",\\\"message\\\":\\\"2025-11-24T11:16:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928\\\\n2025-11-24T11:16:54+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928 to /host/opt/cni/bin/\\\\n2025-11-24T11:16:54Z [verbose] multus-daemon started\\\\n2025-11-24T11:16:54Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:17:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.606534 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.628172 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.656428 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.681749 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.692494 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.692530 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.692543 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc 
kubenswrapper[4678]: I1124 11:17:50.692561 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.692574 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.700190 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.710761 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:50Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:50 crc 
kubenswrapper[4678]: I1124 11:17:50.795127 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.795179 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.795192 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.795214 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.795231 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.895096 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:50 crc kubenswrapper[4678]: E1124 11:17:50.895262 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.897914 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.897954 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.897965 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.897980 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:50 crc kubenswrapper[4678]: I1124 11:17:50.897991 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:50Z","lastTransitionTime":"2025-11-24T11:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.000577 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.000623 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.000632 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.000654 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.000686 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.103811 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.103878 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.103897 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.103924 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.103945 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.207086 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.207162 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.207186 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.207221 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.207253 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.310832 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.310872 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.310882 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.310901 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.310912 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.411030 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/3.log" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.412258 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/2.log" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.413054 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.413113 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.413132 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.413203 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.413718 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.416385 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938" exitCode=1 Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.416437 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.416491 4678 scope.go:117] "RemoveContainer" containerID="5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.418308 4678 scope.go:117] "RemoveContainer" containerID="ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938" Nov 24 11:17:51 crc kubenswrapper[4678]: E1124 11:17:51.418734 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.437785 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.459514 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.481710 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5c
f0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.501032 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc 
kubenswrapper[4678]: I1124 11:17:51.518092 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.518153 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.518169 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.518192 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.518207 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.522256 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.553570 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.577407 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.595457 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.619897 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.622507 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.622551 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.622563 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.622582 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.622594 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.642253 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.663556 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.686103 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df
5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.701539 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.716595 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.726079 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.726152 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.726168 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.726187 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.726201 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.744714 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5012050c5c461fce0c485eca7173134620f3bb139dc30498619e6aa6ffb31bd9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:21Z\\\",\\\"message\\\":\\\"60\\\\nI1124 11:17:21.013817 6389 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 11:17:21.014704 6389 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI1124 11:17:21.014784 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 11:17:21.014833 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 11:17:21.015147 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 11:17:21.015189 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 11:17:21.015197 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 11:17:21.015229 6389 factory.go:656] Stopping watch factory\\\\nI1124 11:17:21.015318 6389 ovnkube.go:599] Stopped ovnkube\\\\nI1124 11:17:21.015397 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 11:17:21.015417 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 11:17:21.015428 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 11:17:21.015438 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 11:17:21.015448 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:50Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1124 11:17:50.949131 6768 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-7twxw\\\\nI1124 11:17:50.949156 6768 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1124 11:17:50.949166 6768 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-7twxw\\\\nI1124 11:17:50.949160 6768 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-dns/node-resolver-snkj4\\\\nF1124 11:17:50.949171 6768 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-net
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.760351 4678 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.777267 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:39Z\\\",\\\"message\\\":\\\"2025-11-24T11:16:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928\\\\n2025-11-24T11:16:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928 to /host/opt/cni/bin/\\\\n2025-11-24T11:16:54Z [verbose] multus-daemon started\\\\n2025-11-24T11:16:54Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:17:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:51Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.829977 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.830062 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.830083 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.830113 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.830133 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.894635 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.894711 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:51 crc kubenswrapper[4678]: E1124 11:17:51.894829 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.894904 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:51 crc kubenswrapper[4678]: E1124 11:17:51.895056 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:51 crc kubenswrapper[4678]: E1124 11:17:51.895172 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.916531 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.933418 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.933464 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.933476 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.933496 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:51 crc kubenswrapper[4678]: I1124 11:17:51.933509 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:51Z","lastTransitionTime":"2025-11-24T11:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.035974 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.036037 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.036055 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.036083 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.036098 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.138989 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.139039 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.139056 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.139083 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.139101 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.242164 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.242240 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.242258 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.242287 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.242306 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.348452 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.348550 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.348577 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.348608 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.348634 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.424190 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/3.log" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.431457 4678 scope.go:117] "RemoveContainer" containerID="ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938" Nov 24 11:17:52 crc kubenswrapper[4678]: E1124 11:17:52.431814 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.452886 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.452973 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.453004 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.453039 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.453066 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.453345 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z
xcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-b
incopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.472450 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.487079 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.506244 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dab
d6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.532995 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.552808 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.556871 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.556974 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.556988 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.557015 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.557030 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.573293 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.586789 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.612436 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:50Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1124 11:17:50.949131 6768 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-7twxw\\\\nI1124 11:17:50.949156 6768 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1124 11:17:50.949166 6768 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-7twxw\\\\nI1124 11:17:50.949160 6768 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-snkj4\\\\nF1124 11:17:50.949171 6768 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.627576 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.655973 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9386351f-8669-4aea-b888-4fd3f8f687e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5de6aa867dd10462e39753512ef93c3e32b8baf2000b123a566044ea4072f362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51daef109047dbfd48f60c3088716c9fcfadd2ff94592e06240869573a49eaf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc7e60ec336db411b3c1192707fe68ff8477719c2df85787a88e041516cb833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11
:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db24eb51b717c58b3558d9ab761fd79be95cad4ea4a75936fd007a4c0c12dcb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://712fd467877cad1a6db913f343aaafa1330e9d13b00f29ac27541f3899915368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d856633caf65f681108821ea5c34705b1588bd7d839ab8c0630db4efe00241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d856633caf65f681108821ea5c34705b1588bd7d839ab8c0630db4efe00241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356b97141c23284d5aef42027f840aa50a4e31cb47f2b4ef88011c8c474e8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://356b97141c23284d5aef42027f840aa50a4e31cb47f2b4ef88011c8c474e8c2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6ee4678d6d88768c4f83f30bca0f06c9697da23bc35c1c43ea30a85bea50059e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee4678d6d88768c4f83f30bca0f06c9697da23bc35c1c43ea30a85bea50059e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.660589 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.660700 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.660713 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.660736 4678 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.660753 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.672735 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:39Z\\\",\\\"message\\\":\\\"2025-11-24T11:16:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928\\\\n2025-11-24T11:16:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928 to /host/opt/cni/bin/\\\\n2025-11-24T11:16:54Z [verbose] multus-daemon started\\\\n2025-11-24T11:16:54Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:17:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"h
ost-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.688990 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.711309 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.732245 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e1175622434bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.750825 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2
b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.764538 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.764600 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.764614 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc 
kubenswrapper[4678]: I1124 11:17:52.764642 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.764656 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.768168 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.783886 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:52Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:52 crc 
kubenswrapper[4678]: I1124 11:17:52.867429 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.867504 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.867523 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.867550 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.867569 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.895226 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:52 crc kubenswrapper[4678]: E1124 11:17:52.895465 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.971730 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.971789 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.971806 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.971833 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:52 crc kubenswrapper[4678]: I1124 11:17:52.971856 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:52Z","lastTransitionTime":"2025-11-24T11:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.074813 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.074902 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.074923 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.074950 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.074995 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.178023 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.178092 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.178111 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.178139 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.178161 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.282426 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.282496 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.282517 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.282552 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.282575 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.385872 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.385943 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.385960 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.385987 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.386003 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.427772 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.428006 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428069 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.428024474 +0000 UTC m=+148.359084123 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.428160 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428208 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428240 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.428248 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428265 4678 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428350 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.428317922 +0000 UTC m=+148.359377601 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428399 4678 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.428403 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428434 4678 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428457 4678 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.428445836 +0000 UTC m=+148.359505585 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428592 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.428563819 +0000 UTC m=+148.359623528 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428618 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428642 4678 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428659 4678 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.428757 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.428741625 +0000 UTC m=+148.359801304 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.489536 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.489632 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.489659 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.489748 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.489772 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.593314 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.593387 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.593408 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.593442 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.593462 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.696827 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.696899 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.696920 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.696949 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.696967 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.799445 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.799534 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.799557 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.799583 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.799601 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.895055 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.895176 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.895353 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.895416 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.895544 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:53 crc kubenswrapper[4678]: E1124 11:17:53.895639 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.902934 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.902981 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.902990 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.903008 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.903018 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:53Z","lastTransitionTime":"2025-11-24T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:53 crc kubenswrapper[4678]: I1124 11:17:53.907646 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.005249 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.005280 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.005291 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.005306 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.005315 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.107819 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.107881 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.107899 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.107925 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.107945 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.210576 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.210645 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.210658 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.210701 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.210717 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.314462 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.314593 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.314614 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.314644 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.314701 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.417597 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.417660 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.417702 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.417727 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.417746 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.521558 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.521643 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.521699 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.521736 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.521761 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.625263 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.625313 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.625324 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.625342 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.625353 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.728626 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.728705 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.728722 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.728743 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.728757 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.832419 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.832483 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.832501 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.832526 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.832547 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.895070 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:54 crc kubenswrapper[4678]: E1124 11:17:54.895283 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.935527 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.935569 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.935581 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.935598 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:54 crc kubenswrapper[4678]: I1124 11:17:54.935611 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:54Z","lastTransitionTime":"2025-11-24T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.038709 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.038778 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.038793 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.038812 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.038827 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.142470 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.142978 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.143202 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.143579 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.143763 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.247103 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.247182 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.247201 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.247234 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.247255 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.351011 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.351083 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.351103 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.351128 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.351148 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.453950 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.454016 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.454033 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.454058 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.454074 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.557563 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.557614 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.557632 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.557657 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.557709 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.660309 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.660381 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.660404 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.660434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.660456 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.763504 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.764012 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.764148 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.764341 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.764562 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.867809 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.867902 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.867928 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.867957 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.867979 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.894978 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:55 crc kubenswrapper[4678]: E1124 11:17:55.895187 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.895504 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:55 crc kubenswrapper[4678]: E1124 11:17:55.895649 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.896085 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:55 crc kubenswrapper[4678]: E1124 11:17:55.896271 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.972302 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.972374 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.972391 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.972419 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:55 crc kubenswrapper[4678]: I1124 11:17:55.972437 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:55Z","lastTransitionTime":"2025-11-24T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.075338 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.075410 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.075428 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.075455 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.075473 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.178961 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.179024 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.179045 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.179075 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.179095 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.282217 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.282581 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.282687 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.282769 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.282855 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.386477 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.386550 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.386568 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.386596 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.386614 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.489182 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.489569 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.489660 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.489959 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.490038 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.593775 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.593993 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.594084 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.594118 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.594139 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.697088 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.697145 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.697157 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.697177 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.697191 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.799808 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.799859 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.799873 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.799893 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.799906 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.894966 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:56 crc kubenswrapper[4678]: E1124 11:17:56.895182 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.903870 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.903920 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.903938 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.903960 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:56 crc kubenswrapper[4678]: I1124 11:17:56.903979 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:56Z","lastTransitionTime":"2025-11-24T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.007470 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.007613 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.007634 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.007664 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.007827 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.110885 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.111010 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.111038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.111072 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.111096 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.214577 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.214627 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.214639 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.214659 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.214693 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.317989 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.318032 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.318057 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.318075 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.318087 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.421173 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.421221 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.421232 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.421253 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.421264 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.523430 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.523916 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.523931 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.523953 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.523967 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.627791 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.627851 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.627869 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.627893 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.627909 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.731891 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.732019 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.732042 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.732067 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.732095 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.836165 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.836306 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.836325 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.836349 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.836401 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.895367 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:57 crc kubenswrapper[4678]: E1124 11:17:57.895568 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.895384 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.895719 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:57 crc kubenswrapper[4678]: E1124 11:17:57.895855 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:57 crc kubenswrapper[4678]: E1124 11:17:57.896111 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.939642 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.939755 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.939776 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.939839 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:57 crc kubenswrapper[4678]: I1124 11:17:57.939860 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:57Z","lastTransitionTime":"2025-11-24T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.015261 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.015346 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.015365 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.015843 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.015893 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: E1124 11:17:58.041169 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.047408 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.047504 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.047561 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.047592 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.047643 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: E1124 11:17:58.072607 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.080843 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.080958 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.081021 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.081053 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.081176 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: E1124 11:17:58.105797 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.111738 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.111821 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.111862 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.111886 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.111901 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: E1124 11:17:58.134272 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.140772 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.140850 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.140872 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.141253 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.141645 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: E1124 11:17:58.158218 4678 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37fc4262-6086-4dd5-aa35-53966bd309d2\\\",\\\"systemUUID\\\":\\\"bab8289d-1a3e-4a7d-817f-6b8fdc970a7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:58Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:58 crc kubenswrapper[4678]: E1124 11:17:58.158412 4678 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.160872 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.160943 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.160957 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.160976 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.161022 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.264401 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.264482 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.264505 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.264578 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.264602 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.368186 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.368268 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.368294 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.368328 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.368350 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.471920 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.471980 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.471997 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.472026 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.472048 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.575203 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.575248 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.575259 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.575277 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.575289 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.679192 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.679272 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.679335 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.679372 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.679410 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.782136 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.782205 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.782229 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.782264 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.782287 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.885596 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.885663 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.885730 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.885760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.885784 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.894535 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:17:58 crc kubenswrapper[4678]: E1124 11:17:58.894747 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.989370 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.989416 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.989427 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.989444 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:58 crc kubenswrapper[4678]: I1124 11:17:58.989454 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:58Z","lastTransitionTime":"2025-11-24T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.093340 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.093469 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.093493 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.093525 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.093547 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.196729 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.196817 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.196836 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.196865 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.196886 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.300177 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.300257 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.300277 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.300306 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.300327 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.403350 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.403434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.403454 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.403484 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.403503 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.513832 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.514266 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.514432 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.514597 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.514759 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.618378 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.618442 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.618461 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.618487 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.618504 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.721898 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.722013 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.722032 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.722060 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.722079 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.825154 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.825227 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.825250 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.825283 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.825308 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.895388 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.895475 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:17:59 crc kubenswrapper[4678]: E1124 11:17:59.895605 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.895396 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:17:59 crc kubenswrapper[4678]: E1124 11:17:59.895772 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:17:59 crc kubenswrapper[4678]: E1124 11:17:59.895834 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.920206 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86d47af609a2882333474566e105d41e8e1c97f707c73283b4827bf8082e67f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e117562243
4bb3598c600711f0d8765c088af943a88ea49bc4fa932688c2016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.929203 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.929278 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.929302 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.929334 4678 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.929364 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:17:59Z","lastTransitionTime":"2025-11-24T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.941223 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7ceb4b-c0fc-4888-b251-a87db4a2665e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8cb1d03058b0ca613e3393082d325a59b75febea4f78dff9b6a56200f1c431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bkfd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhrs6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.961334 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b64ed0b-8ce8-48ee-bcb6-551fc853626a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb01958957786bd33fad41633c2cf974036762c3d524e03439b3adf578d57d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a6e34fdb0d0b48765cad824c1704bec2f5cf0728e4f4514d0662adde2f496e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7l9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zdtgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T11:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:17:59 crc kubenswrapper[4678]: I1124 11:17:59.979041 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dca80848-6c0a-4946-980a-197e2ecfc898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjxrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:17:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pg6bk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc 
kubenswrapper[4678]: I1124 11:18:00.001324 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.023117 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c10b0c-04de-4b5b-b189-f778a0568443\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f003b0cfebb220e52792a5c28177053e295937e8fbd289da58977ba41c1d6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44115f400ac4e25614d1c5c574fa5ff30b17375cab9d21a0deffbbb1d537a485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa6b6c8b246f233d00d8ab09e894ced7543605acce05cf29502d4a44b959feed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4b2b20dcc58153b6bb434578fbf6e0ce826be39e37da56455e66153bf2e8a5e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.032477 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.032979 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.033120 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.033307 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.033456 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.040112 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0aa992-79b7-4430-85d2-fce34936df01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc0d94ca16625145e4f0c3f5b3b888d95b483763219627621af1ee2fa9430f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d078ae92188296bcea886d9fa2ef790203fdfe36c7c25e59bf384a8db4099edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a24ad6373f4b8c364e2c30cdb2e3e7c74db55b9b87fa700d53b94b730bc08f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de368b82421e7222e4160a6ca34ddd4d9e484c164ca202ba4e21b918b42243b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://735ea46be9ea71a6b66bba87922e9750d47f6ab97b449ff35358f4fccf3fbfc7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T11:16:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 11:16:43.867146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 11:16:43.869343 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1203553852/tls.crt::/tmp/serving-cert-1203553852/tls.key\\\\\\\"\\\\nI1124 11:16:49.362457 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 11:16:49.375897 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 11:16:49.375932 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 11:16:49.375972 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 11:16:49.375981 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 11:16:49.397365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 11:16:49.397393 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 11:16:49.397401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 11:16:49.397405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 11:16:49.397407 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 11:16:49.397411 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 11:16:49.397579 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 11:16:49.399326 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c20f71fc52fb12203e400cd706cd272378da1832fea66b24dbeab8202d44ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://311a35f6601067799c1cf7190b2355dabd6053325a7f9139f5060544837012a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.062386 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b847db8f3db2f3402d282fe183a7f4c87f66018b28b61347d23bbba0334b2f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.083772 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.107346 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.121377 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snkj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6ee7405-6c4a-4768-a467-0d931c4143da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b706cd855bd6c8699643070964bb69f95f37b51b13e4c4878ee34b714bf3f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gknxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snkj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.135633 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.135698 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.135712 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.135732 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.135744 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.143998 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fdaea25-35e1-4a8b-aabd-ec50fb9af003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2b6e8be87ea64550a6170567010020b680e000d53faacd49997fd8e1ec5cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e90e6eb3d296c40a32bd82ad612bb812f38bfcf3eb459db72000ef80b44b922\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1d062495e21105a484814952f92cc1aad971607bd19c1583bac48b70b9a3f2ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a3aba48eb0f2c6f8fcd2607a4ef638510e5583d5111a2f57e8c35efcb175d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c8f8d29337c2c629c4452d610aae40fefa18134b6b15d2d66aeb3b30b1013ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd686175e552b273f8b4338a136d8df5daba2926ab450120441ea465779b47b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe81e82f6a49f2e5fad5a98c05b3eafe347f04e49f6c44f60b9aaf6616ca31fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxcdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tnrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.162377 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a441c7d7-2de8-4ed6-b972-db8d8d55889d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf05a3417c192495121841df598cc2cded9d076e557924c17f73953118580b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc0452cfcca23844989d55ebcc2e8337acc58871be9cf9e1ef171256266bf828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93b90059ad00bcce6ef799d2c0f377ad31e0490438b58bd0378ca6d512ad3ff5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e401d483820f6a2e35c5c65a6f3ebc8cabfe4b91bb3c34f58620aedf53f87962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.178019 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7twxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"533ce88b-4af0-47e6-a890-d25fb0e126be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5ac2af34acad4a132f62e789bcdda9a30f1dc6ede4cb3f2a302b950f7e0f2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtjlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7twxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.202366 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:50Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1124 11:17:50.949131 6768 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-7twxw\\\\nI1124 11:17:50.949156 6768 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1124 11:17:50.949166 6768 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-7twxw\\\\nI1124 11:17:50.949160 6768 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-snkj4\\\\nF1124 11:17:50.949171 6768 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:17:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82bb2466d5687e3c59
8c8e28da5ad1862e9567c063e5f854ff66af04960335a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqfl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsq5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.227940 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9386351f-8669-4aea-b888-4fd3f8f687e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5de6aa867dd10462e39753512ef93c3e32b8baf2000b123a566044ea4072f362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51daef109047dbfd48f60c3088716c9fcfadd2ff94592e06240869573a49eaf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc7e60ec336db411b3c1192707fe68ff8477719c2df85787a88e041516cb833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db24eb51b717c58b3558d9ab761fd79be95cad4ea4a75936fd007a4c0c12dcb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://712fd467877cad1a6db913f343aaafa1330e9d13b00f29ac27541f3899915368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d856633caf65f681108821ea5c34705b1588bd7d839ab8c0630db4efe00241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d856633caf65f681108821ea5c34705b1588bd7d839ab8c0630db4efe00241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356b97141c23284d5aef42027f840aa50a4e31cb47f2b4ef88011c8c474e8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://356b97141c23284d5aef42027f840aa50a4e31cb47f2b4ef88011c8c474e8c2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6ee4678d6d88768c4f83f30bca0f06c9697da23bc35c1c43ea30a85bea50059e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ee4678d6d88768c4f83f30bca0f06c9697da23bc35c1c43ea30a85bea50059e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.238850 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.238916 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.238934 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.238960 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.238978 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.244858 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h24xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f159c812-75d9-4ad6-9e20-4d208ffe42fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T11:17:39Z\\\",\\\"message\\\":\\\"2025-11-24T11:16:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928\\\\n2025-11-24T11:16:54+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26b90776-d8a7-468d-803e-672195447928 to /host/opt/cni/bin/\\\\n2025-11-24T11:16:54Z [verbose] multus-daemon started\\\\n2025-11-24T11:16:54Z [verbose] Readiness Indicator file check\\\\n2025-11-24T11:17:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:17:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4lswb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h24xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.260263 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f35cb21-f825-4eed-9250-d34739f8db54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ac819763d72864a1a144895080910c2a12faba46c1b761c5e37ae284bed137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c49cae4300d033a193064ef4f0b98aa8468fff60d6b21067a0e9cd48965fc03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c49cae4300d033a193064ef4f0b98aa8468fff60d6b21067a0e9cd48965fc03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T11:16:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T11:16:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T11:16:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.274484 4678 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T11:16:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5b8451c9f4f947a9f1cb2ae6a178ca8a596c2f02dbc8c14aa4b7c5db472c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T11:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T11:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.342228 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.342287 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.342300 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.342326 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.342342 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.445443 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.445518 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.445544 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.445575 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.445596 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.549882 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.550300 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.550499 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.550642 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.550814 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.655200 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.655269 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.655286 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.655311 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.655331 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.758049 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.758114 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.758128 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.758151 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.758166 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.861151 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.861199 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.861215 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.861247 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.861269 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.894847 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:18:00 crc kubenswrapper[4678]: E1124 11:18:00.895028 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.964328 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.964375 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.964388 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.964404 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:00 crc kubenswrapper[4678]: I1124 11:18:00.964415 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:00Z","lastTransitionTime":"2025-11-24T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.067388 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.067424 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.067434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.067449 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.067459 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.170705 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.170756 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.170769 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.170791 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.170806 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.274437 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.274508 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.274526 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.274552 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.274573 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.378199 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.378257 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.378269 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.378291 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.378303 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.481573 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.481626 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.481636 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.481657 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.481691 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.585154 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.585219 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.585230 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.585249 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.585261 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.689817 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.689890 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.689908 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.689935 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.689954 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.794391 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.794465 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.794488 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.794519 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.794542 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.895084 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.895154 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.895161 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk"
Nov 24 11:18:01 crc kubenswrapper[4678]: E1124 11:18:01.895759 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 11:18:01 crc kubenswrapper[4678]: E1124 11:18:01.895925 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 11:18:01 crc kubenswrapper[4678]: E1124 11:18:01.896008 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.897305 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.897344 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.897357 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.897376 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:01 crc kubenswrapper[4678]: I1124 11:18:01.897390 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:01Z","lastTransitionTime":"2025-11-24T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.000984 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.001069 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.001097 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.001134 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.001156 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.105483 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.105556 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.105578 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.105606 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.105626 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.208723 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.208795 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.208812 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.208839 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.208856 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.311816 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.311885 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.311907 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.311937 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.311959 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.416198 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.416796 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.417007 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.417170 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.417355 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.521321 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.521402 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.521496 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.521533 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.521563 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.625462 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.625541 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.625561 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.625592 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.625612 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.729242 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.729351 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.729379 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.729420 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.729449 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.832955 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.833039 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.833068 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.833103 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.833133 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.895092 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 11:18:02 crc kubenswrapper[4678]: E1124 11:18:02.895299 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.937050 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.937126 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.937144 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.937171 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:02 crc kubenswrapper[4678]: I1124 11:18:02.937189 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:02Z","lastTransitionTime":"2025-11-24T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.041239 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.041301 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.041315 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.041338 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.041356 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.144754 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.144814 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.144826 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.144846 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.144858 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.248075 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.248175 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.248201 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.248235 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.248263 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.351775 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.351883 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.351914 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.351942 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.351961 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.455040 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.455083 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.455093 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.455112 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.455123 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.558238 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.558306 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.558318 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.558341 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.558357 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.662493 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.662570 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.662589 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.662616 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.662638 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.766008 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.766302 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.766334 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.766368 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.766392 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.869356 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.869413 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.869424 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.869446 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.869463 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.895309 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.895394 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.895333 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:03 crc kubenswrapper[4678]: E1124 11:18:03.895506 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:03 crc kubenswrapper[4678]: E1124 11:18:03.895644 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:03 crc kubenswrapper[4678]: E1124 11:18:03.895782 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.973246 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.973330 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.973352 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.973380 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:03 crc kubenswrapper[4678]: I1124 11:18:03.973402 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:03Z","lastTransitionTime":"2025-11-24T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.076896 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.076959 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.076976 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.077000 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.077020 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.180529 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.180751 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.180787 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.180814 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.180833 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.284229 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.284300 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.284323 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.284360 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.284384 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.388110 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.388162 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.388174 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.388192 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.388207 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.491336 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.491407 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.491426 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.491459 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.491526 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.594585 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.594664 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.594730 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.594767 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.594792 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.698410 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.698488 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.698509 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.698545 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.698565 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.802121 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.802180 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.802192 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.802216 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.802231 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.895710 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:04 crc kubenswrapper[4678]: E1124 11:18:04.895968 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.905374 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.905434 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.905447 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.905468 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:04 crc kubenswrapper[4678]: I1124 11:18:04.905486 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:04Z","lastTransitionTime":"2025-11-24T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.008777 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.008928 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.008951 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.008977 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.008996 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.112187 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.112248 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.112263 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.112291 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.112307 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.216308 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.216386 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.216409 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.216485 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.216510 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.319464 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.319512 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.319524 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.319544 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.319564 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.422951 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.423016 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.423034 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.423062 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.423084 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.526310 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.526393 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.526413 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.526436 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.526450 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.630136 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.630248 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.630298 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.630325 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.630343 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.733531 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.733576 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.733588 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.733608 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.733622 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.836364 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.836462 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.836480 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.836500 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.836515 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.895587 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:05 crc kubenswrapper[4678]: E1124 11:18:05.895848 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.895913 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.896273 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:05 crc kubenswrapper[4678]: E1124 11:18:05.896506 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:05 crc kubenswrapper[4678]: E1124 11:18:05.896647 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.939815 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.939867 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.939876 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.939895 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:05 crc kubenswrapper[4678]: I1124 11:18:05.939907 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:05Z","lastTransitionTime":"2025-11-24T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.043135 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.043198 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.043222 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.043252 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.043279 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.147607 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.147722 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.147749 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.147779 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.147801 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.250821 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.250894 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.250913 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.250941 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.250959 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.354581 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.354662 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.354728 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.354762 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.354782 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.461266 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.461340 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.461362 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.461390 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.461407 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.564865 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.564935 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.564953 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.564978 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.564995 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.668604 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.668772 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.668794 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.668823 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.668840 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.773018 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.773156 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.773175 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.773213 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.773232 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.876941 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.877026 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.877044 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.877070 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.877092 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.895661 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:06 crc kubenswrapper[4678]: E1124 11:18:06.896327 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.897460 4678 scope.go:117] "RemoveContainer" containerID="ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938" Nov 24 11:18:06 crc kubenswrapper[4678]: E1124 11:18:06.897765 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.980993 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.981091 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.981111 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.981137 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:06 crc kubenswrapper[4678]: I1124 11:18:06.981157 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:06Z","lastTransitionTime":"2025-11-24T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.084156 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.084217 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.084236 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.084263 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.084283 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.187579 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.187631 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.187642 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.187660 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.187700 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.290038 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.290151 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.290172 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.290215 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.290233 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.393018 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.393065 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.393074 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.393090 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.393101 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.495982 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.496033 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.496044 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.496061 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.496073 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.598362 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.598408 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.598433 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.598448 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.598459 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.702021 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.702144 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.702164 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.702188 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.702236 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.806053 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.806140 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.806163 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.806195 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.806218 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.895518 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:07 crc kubenswrapper[4678]: E1124 11:18:07.895802 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.895875 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.895983 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:07 crc kubenswrapper[4678]: E1124 11:18:07.896054 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:07 crc kubenswrapper[4678]: E1124 11:18:07.896189 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.909425 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.909494 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.909515 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.909541 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:07 crc kubenswrapper[4678]: I1124 11:18:07.909563 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:07Z","lastTransitionTime":"2025-11-24T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.013555 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.013650 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.013711 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.013749 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.013775 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:08Z","lastTransitionTime":"2025-11-24T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.116760 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.116816 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.116833 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.116856 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.116874 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:08Z","lastTransitionTime":"2025-11-24T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.220075 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.220166 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.220187 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.220217 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.220240 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:08Z","lastTransitionTime":"2025-11-24T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.324148 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.324217 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.324236 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.324261 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.324281 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:08Z","lastTransitionTime":"2025-11-24T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.427973 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.428050 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.428074 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.428105 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.428126 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:08Z","lastTransitionTime":"2025-11-24T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.531247 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.531344 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.531362 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.531389 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.531407 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:08Z","lastTransitionTime":"2025-11-24T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.533883 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.533939 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.533957 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.533985 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.534002 4678 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T11:18:08Z","lastTransitionTime":"2025-11-24T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.615395 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t"] Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.615918 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.619796 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.620309 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.620556 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.620930 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.673870 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.673830747 podStartE2EDuration="15.673830747s" podCreationTimestamp="2025-11-24 11:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.641246546 +0000 UTC m=+99.572306225" watchObservedRunningTime="2025-11-24 11:18:08.673830747 +0000 UTC m=+99.604890426" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.695888 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.695858093 podStartE2EDuration="17.695858093s" podCreationTimestamp="2025-11-24 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.674793774 +0000 UTC m=+99.605853463" watchObservedRunningTime="2025-11-24 11:18:08.695858093 +0000 UTC m=+99.626917772" Nov 24 
11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.716059 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h24xv" podStartSLOduration=77.716024785 podStartE2EDuration="1m17.716024785s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.69578131 +0000 UTC m=+99.626840949" watchObservedRunningTime="2025-11-24 11:18:08.716024785 +0000 UTC m=+99.647084474" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.722169 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.722552 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.723035 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.723373 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.723610 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.812781 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podStartSLOduration=77.812747468 podStartE2EDuration="1m17.812747468s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.81209942 +0000 UTC m=+99.743159079" watchObservedRunningTime="2025-11-24 11:18:08.812747468 +0000 UTC m=+99.743807117" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.824594 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.824885 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.824988 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.825069 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.825210 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.825343 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.825478 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.826740 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.842208 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.847354 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zdtgc" podStartSLOduration=77.847328837 podStartE2EDuration="1m17.847328837s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.83046674 +0000 UTC m=+99.761526379" watchObservedRunningTime="2025-11-24 11:18:08.847328837 +0000 UTC m=+99.778388476" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.854632 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70d2a700-2c3a-4adc-843f-7547f4cc9f9e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pgz4t\" (UID: \"70d2a700-2c3a-4adc-843f-7547f4cc9f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.888690 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-snkj4" podStartSLOduration=77.88864779 podStartE2EDuration="1m17.88864779s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.888538687 +0000 UTC m=+99.819598346" watchObservedRunningTime="2025-11-24 11:18:08.88864779 +0000 UTC m=+99.819707429" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.895077 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:08 crc kubenswrapper[4678]: E1124 11:18:08.895183 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.924636 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7tnrj" podStartSLOduration=77.924610059 podStartE2EDuration="1m17.924610059s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.909799661 +0000 UTC m=+99.840859310" watchObservedRunningTime="2025-11-24 11:18:08.924610059 +0000 UTC m=+99.855669708" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.925198 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.925190915 podStartE2EDuration="1m15.925190915s" podCreationTimestamp="2025-11-24 11:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.924403023 +0000 UTC m=+99.855462682" watchObservedRunningTime="2025-11-24 11:18:08.925190915 +0000 UTC m=+99.856250564" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.935510 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.941088 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.941056704 podStartE2EDuration="41.941056704s" podCreationTimestamp="2025-11-24 11:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.940933251 +0000 UTC m=+99.871992910" watchObservedRunningTime="2025-11-24 11:18:08.941056704 +0000 UTC m=+99.872116383" Nov 24 11:18:08 crc kubenswrapper[4678]: I1124 11:18:08.987779 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.987752892 podStartE2EDuration="1m18.987752892s" podCreationTimestamp="2025-11-24 11:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:08.966878299 +0000 UTC m=+99.897937948" watchObservedRunningTime="2025-11-24 11:18:08.987752892 +0000 UTC m=+99.918812541" Nov 24 11:18:09 crc kubenswrapper[4678]: I1124 11:18:09.070333 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7twxw" podStartSLOduration=78.070310887 podStartE2EDuration="1m18.070310887s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:09.070009438 +0000 UTC m=+100.001069117" watchObservedRunningTime="2025-11-24 11:18:09.070310887 +0000 UTC m=+100.001370526" Nov 24 11:18:09 crc kubenswrapper[4678]: I1124 11:18:09.499962 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" event={"ID":"70d2a700-2c3a-4adc-843f-7547f4cc9f9e","Type":"ContainerStarted","Data":"02dad668f8d975459a86a8eaeca2543154a149f2e7b3e17ce4e32ccb20fc7a31"} Nov 24 11:18:09 crc kubenswrapper[4678]: I1124 11:18:09.500073 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" event={"ID":"70d2a700-2c3a-4adc-843f-7547f4cc9f9e","Type":"ContainerStarted","Data":"0b4e5c1e9131ffb88d0e01e7ecc6c6f07d7a677f02e07d0026e8975ef2d3f1e8"} Nov 24 11:18:09 crc kubenswrapper[4678]: I1124 11:18:09.837027 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:09 crc kubenswrapper[4678]: E1124 11:18:09.837237 4678 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:18:09 crc kubenswrapper[4678]: E1124 11:18:09.837367 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs podName:dca80848-6c0a-4946-980a-197e2ecfc898 nodeName:}" failed. No retries permitted until 2025-11-24 11:19:13.837331939 +0000 UTC m=+164.768391618 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs") pod "network-metrics-daemon-pg6bk" (UID: "dca80848-6c0a-4946-980a-197e2ecfc898") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 11:18:09 crc kubenswrapper[4678]: I1124 11:18:09.895265 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:09 crc kubenswrapper[4678]: I1124 11:18:09.895432 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:09 crc kubenswrapper[4678]: E1124 11:18:09.897383 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:09 crc kubenswrapper[4678]: I1124 11:18:09.897448 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:09 crc kubenswrapper[4678]: E1124 11:18:09.897645 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:09 crc kubenswrapper[4678]: E1124 11:18:09.897896 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:10 crc kubenswrapper[4678]: I1124 11:18:10.895154 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:10 crc kubenswrapper[4678]: E1124 11:18:10.895395 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:11 crc kubenswrapper[4678]: I1124 11:18:11.895229 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:11 crc kubenswrapper[4678]: I1124 11:18:11.895267 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:11 crc kubenswrapper[4678]: I1124 11:18:11.895429 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:11 crc kubenswrapper[4678]: E1124 11:18:11.895570 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:11 crc kubenswrapper[4678]: E1124 11:18:11.895830 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:11 crc kubenswrapper[4678]: E1124 11:18:11.895911 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:12 crc kubenswrapper[4678]: I1124 11:18:12.895096 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:12 crc kubenswrapper[4678]: E1124 11:18:12.895277 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:13 crc kubenswrapper[4678]: I1124 11:18:13.894536 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:13 crc kubenswrapper[4678]: I1124 11:18:13.894688 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:13 crc kubenswrapper[4678]: E1124 11:18:13.894776 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:13 crc kubenswrapper[4678]: I1124 11:18:13.894870 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:13 crc kubenswrapper[4678]: E1124 11:18:13.895036 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:13 crc kubenswrapper[4678]: E1124 11:18:13.895208 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:14 crc kubenswrapper[4678]: I1124 11:18:14.894524 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:14 crc kubenswrapper[4678]: E1124 11:18:14.894800 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:15 crc kubenswrapper[4678]: I1124 11:18:15.895722 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:15 crc kubenswrapper[4678]: I1124 11:18:15.895945 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:15 crc kubenswrapper[4678]: E1124 11:18:15.896129 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:15 crc kubenswrapper[4678]: I1124 11:18:15.896579 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:15 crc kubenswrapper[4678]: E1124 11:18:15.896587 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:15 crc kubenswrapper[4678]: E1124 11:18:15.897057 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:16 crc kubenswrapper[4678]: I1124 11:18:16.895931 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:16 crc kubenswrapper[4678]: E1124 11:18:16.896139 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:17 crc kubenswrapper[4678]: I1124 11:18:17.895070 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:17 crc kubenswrapper[4678]: I1124 11:18:17.895140 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:17 crc kubenswrapper[4678]: E1124 11:18:17.895234 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:17 crc kubenswrapper[4678]: I1124 11:18:17.895357 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:17 crc kubenswrapper[4678]: E1124 11:18:17.895507 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:17 crc kubenswrapper[4678]: E1124 11:18:17.896523 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:18 crc kubenswrapper[4678]: I1124 11:18:18.895297 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:18 crc kubenswrapper[4678]: E1124 11:18:18.895846 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:18 crc kubenswrapper[4678]: I1124 11:18:18.896000 4678 scope.go:117] "RemoveContainer" containerID="ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938" Nov 24 11:18:18 crc kubenswrapper[4678]: E1124 11:18:18.896220 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:18:19 crc kubenswrapper[4678]: I1124 11:18:19.894856 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:19 crc kubenswrapper[4678]: I1124 11:18:19.894856 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:19 crc kubenswrapper[4678]: E1124 11:18:19.896854 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:19 crc kubenswrapper[4678]: I1124 11:18:19.896911 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:19 crc kubenswrapper[4678]: E1124 11:18:19.897210 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:19 crc kubenswrapper[4678]: E1124 11:18:19.897372 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:20 crc kubenswrapper[4678]: I1124 11:18:20.894987 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:20 crc kubenswrapper[4678]: E1124 11:18:20.895184 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:21 crc kubenswrapper[4678]: I1124 11:18:21.896032 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:21 crc kubenswrapper[4678]: I1124 11:18:21.896095 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:21 crc kubenswrapper[4678]: I1124 11:18:21.896201 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:21 crc kubenswrapper[4678]: E1124 11:18:21.896347 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:21 crc kubenswrapper[4678]: E1124 11:18:21.896712 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:21 crc kubenswrapper[4678]: E1124 11:18:21.896593 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:22 crc kubenswrapper[4678]: I1124 11:18:22.894520 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:22 crc kubenswrapper[4678]: E1124 11:18:22.894782 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:23 crc kubenswrapper[4678]: I1124 11:18:23.895201 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:23 crc kubenswrapper[4678]: I1124 11:18:23.895200 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:23 crc kubenswrapper[4678]: E1124 11:18:23.895441 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:23 crc kubenswrapper[4678]: I1124 11:18:23.895518 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:23 crc kubenswrapper[4678]: E1124 11:18:23.895798 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:23 crc kubenswrapper[4678]: E1124 11:18:23.896020 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:24 crc kubenswrapper[4678]: I1124 11:18:24.895948 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:24 crc kubenswrapper[4678]: E1124 11:18:24.896608 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:25 crc kubenswrapper[4678]: I1124 11:18:25.895639 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:25 crc kubenswrapper[4678]: I1124 11:18:25.895938 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:25 crc kubenswrapper[4678]: E1124 11:18:25.896423 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:25 crc kubenswrapper[4678]: I1124 11:18:25.895951 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:25 crc kubenswrapper[4678]: E1124 11:18:25.896608 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:25 crc kubenswrapper[4678]: E1124 11:18:25.896657 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:26 crc kubenswrapper[4678]: I1124 11:18:26.575384 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/1.log" Nov 24 11:18:26 crc kubenswrapper[4678]: I1124 11:18:26.576319 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/0.log" Nov 24 11:18:26 crc kubenswrapper[4678]: I1124 11:18:26.576404 4678 generic.go:334] "Generic (PLEG): container finished" podID="f159c812-75d9-4ad6-9e20-4d208ffe42fb" containerID="d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801" exitCode=1 Nov 24 11:18:26 crc kubenswrapper[4678]: I1124 11:18:26.576706 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h24xv" event={"ID":"f159c812-75d9-4ad6-9e20-4d208ffe42fb","Type":"ContainerDied","Data":"d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801"} Nov 24 11:18:26 crc kubenswrapper[4678]: I1124 11:18:26.576773 4678 scope.go:117] "RemoveContainer" containerID="8662094eeb2c4ecff74e2c36f93b6738879a6767c3c665fbe0cfb06601064f71" Nov 24 11:18:26 crc kubenswrapper[4678]: I1124 11:18:26.577390 4678 scope.go:117] "RemoveContainer" containerID="d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801" Nov 24 11:18:26 crc 
kubenswrapper[4678]: E1124 11:18:26.577605 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-h24xv_openshift-multus(f159c812-75d9-4ad6-9e20-4d208ffe42fb)\"" pod="openshift-multus/multus-h24xv" podUID="f159c812-75d9-4ad6-9e20-4d208ffe42fb" Nov 24 11:18:26 crc kubenswrapper[4678]: I1124 11:18:26.593920 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgz4t" podStartSLOduration=95.593899211 podStartE2EDuration="1m35.593899211s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:09.52575053 +0000 UTC m=+100.456810189" watchObservedRunningTime="2025-11-24 11:18:26.593899211 +0000 UTC m=+117.524958870" Nov 24 11:18:26 crc kubenswrapper[4678]: I1124 11:18:26.894805 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:26 crc kubenswrapper[4678]: E1124 11:18:26.895035 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:27 crc kubenswrapper[4678]: I1124 11:18:27.583406 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/1.log" Nov 24 11:18:27 crc kubenswrapper[4678]: I1124 11:18:27.895256 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:27 crc kubenswrapper[4678]: I1124 11:18:27.895310 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:27 crc kubenswrapper[4678]: E1124 11:18:27.895494 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:27 crc kubenswrapper[4678]: I1124 11:18:27.895855 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:27 crc kubenswrapper[4678]: E1124 11:18:27.895998 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:27 crc kubenswrapper[4678]: E1124 11:18:27.896304 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:28 crc kubenswrapper[4678]: I1124 11:18:28.894825 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:28 crc kubenswrapper[4678]: E1124 11:18:28.895018 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:29 crc kubenswrapper[4678]: I1124 11:18:29.895230 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:29 crc kubenswrapper[4678]: I1124 11:18:29.895282 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:29 crc kubenswrapper[4678]: E1124 11:18:29.898176 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:29 crc kubenswrapper[4678]: I1124 11:18:29.898213 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:29 crc kubenswrapper[4678]: E1124 11:18:29.898358 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:29 crc kubenswrapper[4678]: E1124 11:18:29.898567 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:29 crc kubenswrapper[4678]: I1124 11:18:29.899766 4678 scope.go:117] "RemoveContainer" containerID="ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938" Nov 24 11:18:29 crc kubenswrapper[4678]: E1124 11:18:29.900041 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zsq5s_openshift-ovn-kubernetes(318b13d4-6c61-4b45-bb2f-0a7e243946a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" Nov 24 11:18:29 crc kubenswrapper[4678]: E1124 11:18:29.912733 4678 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 11:18:30 crc kubenswrapper[4678]: E1124 11:18:30.006750 4678 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:18:30 crc kubenswrapper[4678]: I1124 11:18:30.895260 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:30 crc kubenswrapper[4678]: E1124 11:18:30.895550 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:31 crc kubenswrapper[4678]: I1124 11:18:31.894712 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:31 crc kubenswrapper[4678]: I1124 11:18:31.894756 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:31 crc kubenswrapper[4678]: E1124 11:18:31.894926 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:31 crc kubenswrapper[4678]: E1124 11:18:31.895211 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:31 crc kubenswrapper[4678]: I1124 11:18:31.895504 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:31 crc kubenswrapper[4678]: E1124 11:18:31.896387 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:32 crc kubenswrapper[4678]: I1124 11:18:32.895564 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:32 crc kubenswrapper[4678]: E1124 11:18:32.895849 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:33 crc kubenswrapper[4678]: I1124 11:18:33.895785 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:33 crc kubenswrapper[4678]: I1124 11:18:33.895853 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:33 crc kubenswrapper[4678]: E1124 11:18:33.896027 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:33 crc kubenswrapper[4678]: E1124 11:18:33.896466 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:33 crc kubenswrapper[4678]: I1124 11:18:33.896755 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:33 crc kubenswrapper[4678]: E1124 11:18:33.896870 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:34 crc kubenswrapper[4678]: I1124 11:18:34.894846 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:34 crc kubenswrapper[4678]: E1124 11:18:34.895065 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:35 crc kubenswrapper[4678]: E1124 11:18:35.008469 4678 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:18:35 crc kubenswrapper[4678]: I1124 11:18:35.895769 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:35 crc kubenswrapper[4678]: I1124 11:18:35.895808 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:35 crc kubenswrapper[4678]: E1124 11:18:35.896022 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:35 crc kubenswrapper[4678]: E1124 11:18:35.896220 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:35 crc kubenswrapper[4678]: I1124 11:18:35.896569 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:35 crc kubenswrapper[4678]: E1124 11:18:35.896949 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:36 crc kubenswrapper[4678]: I1124 11:18:36.894726 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:36 crc kubenswrapper[4678]: E1124 11:18:36.895019 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:37 crc kubenswrapper[4678]: I1124 11:18:37.894605 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:37 crc kubenswrapper[4678]: I1124 11:18:37.894605 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:37 crc kubenswrapper[4678]: E1124 11:18:37.894884 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:37 crc kubenswrapper[4678]: E1124 11:18:37.894780 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:37 crc kubenswrapper[4678]: I1124 11:18:37.895499 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:37 crc kubenswrapper[4678]: E1124 11:18:37.895654 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:38 crc kubenswrapper[4678]: I1124 11:18:38.894520 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:38 crc kubenswrapper[4678]: E1124 11:18:38.894794 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:39 crc kubenswrapper[4678]: I1124 11:18:39.895107 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:39 crc kubenswrapper[4678]: I1124 11:18:39.896179 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:39 crc kubenswrapper[4678]: I1124 11:18:39.896212 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:39 crc kubenswrapper[4678]: I1124 11:18:39.896341 4678 scope.go:117] "RemoveContainer" containerID="d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801" Nov 24 11:18:39 crc kubenswrapper[4678]: E1124 11:18:39.896399 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:39 crc kubenswrapper[4678]: E1124 11:18:39.896543 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:39 crc kubenswrapper[4678]: E1124 11:18:39.896730 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:40 crc kubenswrapper[4678]: E1124 11:18:40.009496 4678 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:18:40 crc kubenswrapper[4678]: I1124 11:18:40.642302 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/1.log" Nov 24 11:18:40 crc kubenswrapper[4678]: I1124 11:18:40.642376 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h24xv" event={"ID":"f159c812-75d9-4ad6-9e20-4d208ffe42fb","Type":"ContainerStarted","Data":"8bab327ee33ef6b6764f09a9c29750d42a06fb26d0580431da74c25580a9d952"} Nov 24 11:18:40 crc kubenswrapper[4678]: I1124 11:18:40.895206 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:40 crc kubenswrapper[4678]: E1124 11:18:40.895452 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:40 crc kubenswrapper[4678]: I1124 11:18:40.896437 4678 scope.go:117] "RemoveContainer" containerID="ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938" Nov 24 11:18:41 crc kubenswrapper[4678]: I1124 11:18:41.648921 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/3.log" Nov 24 11:18:41 crc kubenswrapper[4678]: I1124 11:18:41.653183 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerStarted","Data":"0a974bbe7632470d424b26235d56421761fefeb71b2355e01b646decde9d5693"} Nov 24 11:18:41 crc kubenswrapper[4678]: I1124 11:18:41.653697 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:18:41 crc kubenswrapper[4678]: I1124 11:18:41.684661 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podStartSLOduration=110.684636081 podStartE2EDuration="1m50.684636081s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:41.683333084 +0000 UTC m=+132.614392743" watchObservedRunningTime="2025-11-24 11:18:41.684636081 +0000 UTC m=+132.615695720" Nov 24 11:18:41 crc kubenswrapper[4678]: I1124 11:18:41.795799 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pg6bk"] Nov 24 11:18:41 crc kubenswrapper[4678]: I1124 11:18:41.795974 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:41 crc kubenswrapper[4678]: E1124 11:18:41.796089 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:41 crc kubenswrapper[4678]: I1124 11:18:41.894861 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:41 crc kubenswrapper[4678]: I1124 11:18:41.894914 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:41 crc kubenswrapper[4678]: E1124 11:18:41.895027 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:41 crc kubenswrapper[4678]: E1124 11:18:41.895210 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:42 crc kubenswrapper[4678]: I1124 11:18:42.894590 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:42 crc kubenswrapper[4678]: E1124 11:18:42.895208 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:43 crc kubenswrapper[4678]: I1124 11:18:43.895392 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:43 crc kubenswrapper[4678]: I1124 11:18:43.895461 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:43 crc kubenswrapper[4678]: I1124 11:18:43.895487 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:43 crc kubenswrapper[4678]: E1124 11:18:43.897119 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pg6bk" podUID="dca80848-6c0a-4946-980a-197e2ecfc898" Nov 24 11:18:43 crc kubenswrapper[4678]: E1124 11:18:43.897285 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 11:18:43 crc kubenswrapper[4678]: E1124 11:18:43.897564 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 11:18:44 crc kubenswrapper[4678]: I1124 11:18:44.895395 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:44 crc kubenswrapper[4678]: E1124 11:18:44.895578 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 11:18:45 crc kubenswrapper[4678]: I1124 11:18:45.894910 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:45 crc kubenswrapper[4678]: I1124 11:18:45.895049 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:45 crc kubenswrapper[4678]: I1124 11:18:45.894941 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk" Nov 24 11:18:45 crc kubenswrapper[4678]: I1124 11:18:45.898714 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 11:18:45 crc kubenswrapper[4678]: I1124 11:18:45.899361 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 11:18:45 crc kubenswrapper[4678]: I1124 11:18:45.899644 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 11:18:45 crc kubenswrapper[4678]: I1124 11:18:45.900277 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 11:18:46 crc kubenswrapper[4678]: I1124 11:18:46.895293 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:46 crc kubenswrapper[4678]: I1124 11:18:46.899170 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 11:18:46 crc kubenswrapper[4678]: I1124 11:18:46.899332 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.615975 4678 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.668375 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9sfxt"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.669055 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.674201 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.675515 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.675980 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.680812 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2qlj9"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.681514 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8"] Nov 24 11:18:49 crc 
kubenswrapper[4678]: I1124 11:18:49.682161 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.683589 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.684925 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kl8pj"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.685843 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.687285 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.688152 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.690497 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.691370 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.691759 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.702035 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.704596 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tf9mj"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.706908 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.724789 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742397 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/430a7abd-f5ce-4886-b79a-436d715e3e1b-node-pullsecrets\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742463 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-etcd-client\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742510 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742537 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-image-import-ca\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: 
I1124 11:18:49.742566 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742592 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-config\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742614 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-encryption-config\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742654 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-config\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742702 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/430a7abd-f5ce-4886-b79a-436d715e3e1b-audit-dir\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " 
pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742731 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742754 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-config\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742777 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4db5l\" (UniqueName: \"kubernetes.io/projected/f3ba498c-9fbe-43ab-82ea-0330759be0fa-kube-api-access-4db5l\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742833 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-images\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742840 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.742858 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-audit-policies\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743047 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-config\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743070 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-serving-cert\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743120 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-etcd-client\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743142 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-audit\") pod 
\"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743163 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-etcd-serving-ca\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743186 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxtw\" (UniqueName: \"kubernetes.io/projected/dd1948d5-d633-4a92-a800-776add7a0894-kube-api-access-fsxtw\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743213 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bdkj\" (UniqueName: \"kubernetes.io/projected/b1550d14-7d6b-43b9-bbbd-268b0274028a-kube-api-access-4bdkj\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743233 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-encryption-config\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743258 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743281 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3ba498c-9fbe-43ab-82ea-0330759be0fa-audit-dir\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743317 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1948d5-d633-4a92-a800-776add7a0894-serving-cert\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743338 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fqc\" (UniqueName: \"kubernetes.io/projected/430a7abd-f5ce-4886-b79a-436d715e3e1b-kube-api-access-h9fqc\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743362 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-client-ca\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743385 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1550d14-7d6b-43b9-bbbd-268b0274028a-serving-cert\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743409 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkqk6\" (UniqueName: \"kubernetes.io/projected/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-kube-api-access-nkqk6\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743439 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-serving-cert\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743460 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-client-ca\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.743482 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.744037 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.745219 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-chw9t"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.745372 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.745821 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.746115 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.746967 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.750288 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.755151 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.755490 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.755770 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.755882 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.756055 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.756306 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.761732 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.761941 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.762245 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 11:18:49 crc 
kubenswrapper[4678]: I1124 11:18:49.762493 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.762842 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.763048 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.763208 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.763331 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.763444 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.763535 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.763612 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.763757 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.763875 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764003 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 11:18:49 crc 
kubenswrapper[4678]: I1124 11:18:49.764117 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764129 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764222 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764288 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764463 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764507 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764646 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764659 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764815 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764928 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.764992 4678 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.765142 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.765239 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.765320 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.765410 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.765758 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.765792 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.765847 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.766221 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.768114 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.769347 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.773267 4678 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.773844 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.773957 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.774173 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.774482 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.774495 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.774692 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.774831 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.774951 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.775030 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.776167 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.778738 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.779604 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jb7bk"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.780403 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.786879 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zzwvq"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.787618 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.788088 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.788089 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zzwvq" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.788548 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.789814 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.826905 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.829316 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.829812 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.829866 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.830062 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.831151 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcwcn"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.832638 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.833404 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.835405 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844733 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844790 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844844 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844873 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64cfe70c-3f37-4f26-b699-d8229dba4508-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844897 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twknq\" (UniqueName: \"kubernetes.io/projected/64cfe70c-3f37-4f26-b699-d8229dba4508-kube-api-access-twknq\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844923 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955rz\" (UniqueName: \"kubernetes.io/projected/019dfbed-3859-4761-890e-cd8205747454-kube-api-access-955rz\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844953 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/974b621b-6635-4ca8-b53d-b15ae31b51b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wm72k\" (UID: \"974b621b-6635-4ca8-b53d-b15ae31b51b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844974 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9216c066-ab74-4299-b586-92eba3e4d36a-machine-approver-tls\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.844997 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2169100e-5122-411b-9cb1-4d1ae0ebbd86-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845025 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-image-import-ca\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845048 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845082 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845126 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc 
kubenswrapper[4678]: I1124 11:18:49.845152 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-audit-policies\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845182 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-config\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845208 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-config\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845329 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-encryption-config\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845351 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0acb73-5437-44f1-a83e-2a3781acce52-serving-cert\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845388 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-oauth-serving-cert\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845428 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-oauth-config\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845474 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845505 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/430a7abd-f5ce-4886-b79a-436d715e3e1b-audit-dir\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845602 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/64cfe70c-3f37-4f26-b699-d8229dba4508-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845631 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/019dfbed-3859-4761-890e-cd8205747454-audit-dir\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845653 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.845751 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rkrb2"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.861313 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.878029 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.879646 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-image-import-ca\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.880013 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/430a7abd-f5ce-4886-b79a-436d715e3e1b-audit-dir\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.880333 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f8b8t"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.880860 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4db5l\" (UniqueName: \"kubernetes.io/projected/f3ba498c-9fbe-43ab-82ea-0330759be0fa-kube-api-access-4db5l\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.880919 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.880976 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-config\") pod 
\"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881009 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mknqk\" (UniqueName: \"kubernetes.io/projected/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-kube-api-access-mknqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881079 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-images\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881112 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2169100e-5122-411b-9cb1-4d1ae0ebbd86-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881133 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881139 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-audit-policies\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881165 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-config\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881217 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-serving-cert\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881238 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjrt9\" (UniqueName: \"kubernetes.io/projected/9216c066-ab74-4299-b586-92eba3e4d36a-kube-api-access-pjrt9\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881267 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-etcd-client\") pod \"apiserver-76f77b778f-kl8pj\" (UID: 
\"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881286 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-config\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881309 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881327 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99n7d\" (UniqueName: \"kubernetes.io/projected/2169100e-5122-411b-9cb1-4d1ae0ebbd86-kube-api-access-99n7d\") pod \"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881345 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881363 
4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-audit\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881380 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-etcd-serving-ca\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881397 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-console-config\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881420 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-trusted-ca-bundle\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881439 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxtw\" (UniqueName: \"kubernetes.io/projected/dd1948d5-d633-4a92-a800-776add7a0894-kube-api-access-fsxtw\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc 
kubenswrapper[4678]: I1124 11:18:49.881461 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g77f\" (UniqueName: \"kubernetes.io/projected/57abb356-60a5-43ec-8ab0-07e2198a494d-kube-api-access-7g77f\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881478 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-serving-cert\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881497 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881518 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64cfe70c-3f37-4f26-b699-d8229dba4508-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881136 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881586 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3d0acb73-5437-44f1-a83e-2a3781acce52-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881608 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9216c066-ab74-4299-b586-92eba3e4d36a-auth-proxy-config\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881634 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bdkj\" (UniqueName: \"kubernetes.io/projected/b1550d14-7d6b-43b9-bbbd-268b0274028a-kube-api-access-4bdkj\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881653 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-encryption-config\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881672 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9216c066-ab74-4299-b586-92eba3e4d36a-config\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881705 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881724 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881741 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881767 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881783 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3ba498c-9fbe-43ab-82ea-0330759be0fa-audit-dir\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881805 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-service-ca-bundle\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881822 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-service-ca\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881840 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1948d5-d633-4a92-a800-776add7a0894-serving-cert\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881857 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fqc\" (UniqueName: 
\"kubernetes.io/projected/430a7abd-f5ce-4886-b79a-436d715e3e1b-kube-api-access-h9fqc\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881879 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-client-ca\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881901 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1550d14-7d6b-43b9-bbbd-268b0274028a-serving-cert\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881918 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42kq8\" (UniqueName: \"kubernetes.io/projected/fef47a87-3f60-4ee1-a31e-b02583fc2819-kube-api-access-42kq8\") pod \"downloads-7954f5f757-zzwvq\" (UID: \"fef47a87-3f60-4ee1-a31e-b02583fc2819\") " pod="openshift-console/downloads-7954f5f757-zzwvq" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881937 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfdv\" (UniqueName: \"kubernetes.io/projected/3d0acb73-5437-44f1-a83e-2a3781acce52-kube-api-access-thfdv\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:49 crc 
kubenswrapper[4678]: I1124 11:18:49.881968 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkqk6\" (UniqueName: \"kubernetes.io/projected/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-kube-api-access-nkqk6\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.881984 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-serving-cert\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882005 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882030 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882025 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882078 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-client-ca\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882103 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/430a7abd-f5ce-4886-b79a-436d715e3e1b-node-pullsecrets\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882122 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbcp\" (UniqueName: \"kubernetes.io/projected/974b621b-6635-4ca8-b53d-b15ae31b51b0-kube-api-access-lxbcp\") pod \"cluster-samples-operator-665b6dd947-wm72k\" (UID: \"974b621b-6635-4ca8-b53d-b15ae31b51b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882139 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882160 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c75wp\" (UniqueName: \"kubernetes.io/projected/38101ae8-9e21-4a62-b839-cc42e0562769-kube-api-access-c75wp\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882197 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-etcd-client\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882218 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57abb356-60a5-43ec-8ab0-07e2198a494d-serving-cert\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882242 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-config\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.882652 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-images\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc 
kubenswrapper[4678]: I1124 11:18:49.883256 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-audit-policies\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.883497 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f3ba498c-9fbe-43ab-82ea-0330759be0fa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.883583 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3ba498c-9fbe-43ab-82ea-0330759be0fa-audit-dir\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.883524 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-config\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.883722 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-config\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.885050 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.885365 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-encryption-config\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.880890 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dr4nh"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.886348 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.886409 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-etcd-serving-ca\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.886532 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/430a7abd-f5ce-4886-b79a-436d715e3e1b-node-pullsecrets\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.886579 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-client-ca\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.886613 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-serving-cert\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.886792 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.887064 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.887273 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-client-ca\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.889061 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-config\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.889174 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-etcd-client\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.890959 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-serving-cert\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.893569 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qlttx"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.894117 4678 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.894152 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.895313 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.895453 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.895923 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.896025 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.896076 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.896812 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.897019 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/430a7abd-f5ce-4886-b79a-436d715e3e1b-audit\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.897073 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 
11:18:49.898089 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.898958 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.899240 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.900242 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.900666 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.900712 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.900822 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901258 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901502 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 11:18:49 crc 
kubenswrapper[4678]: I1124 11:18:49.901572 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901504 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901732 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901825 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901864 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901882 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901968 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.902008 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.901717 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3ba498c-9fbe-43ab-82ea-0330759be0fa-etcd-client\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.902096 4678 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.902199 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.902205 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.902357 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.902399 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.902487 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.902797 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.905597 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1550d14-7d6b-43b9-bbbd-268b0274028a-serving-cert\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.905901 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.905903 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.906151 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.913448 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.913709 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.915234 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/430a7abd-f5ce-4886-b79a-436d715e3e1b-encryption-config\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.917441 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 
11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.917962 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.922429 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9sfxt"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.922469 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wsncx"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.923056 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.923326 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6b4xb"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.923810 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.924055 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.924316 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.925165 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.925886 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.925995 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.926113 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.927373 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.927644 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.927464 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.927778 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.928158 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.928823 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdcv5"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.930649 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1948d5-d633-4a92-a800-776add7a0894-serving-cert\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.930852 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.930699 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.930953 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.932219 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.932587 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4wkf5"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.933339 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.938268 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.945411 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4db5l\" (UniqueName: \"kubernetes.io/projected/f3ba498c-9fbe-43ab-82ea-0330759be0fa-kube-api-access-4db5l\") pod \"apiserver-7bbb656c7d-hw6d8\" (UID: \"f3ba498c-9fbe-43ab-82ea-0330759be0fa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.946881 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.948591 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.949776 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.951581 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.958237 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.959069 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.960094 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.962354 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.966174 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.966692 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.970445 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.971753 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.972826 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.973481 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.973564 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.974055 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.974496 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.975191 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.976339 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.978037 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.979249 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xpp8n"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.981341 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.981371 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kl8pj"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.981384 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-chw9t"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.981492 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.982427 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"] Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983334 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjrt9\" (UniqueName: \"kubernetes.io/projected/9216c066-ab74-4299-b586-92eba3e4d36a-kube-api-access-pjrt9\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983385 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-config\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983409 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99n7d\" (UniqueName: \"kubernetes.io/projected/2169100e-5122-411b-9cb1-4d1ae0ebbd86-kube-api-access-99n7d\") pod \"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983429 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983446 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983464 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-console-config\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983487 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-trusted-ca-bundle\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983506 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g77f\" (UniqueName: \"kubernetes.io/projected/57abb356-60a5-43ec-8ab0-07e2198a494d-kube-api-access-7g77f\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983524 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-serving-cert\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983582 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.983604 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64cfe70c-3f37-4f26-b699-d8229dba4508-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984452 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3d0acb73-5437-44f1-a83e-2a3781acce52-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984487 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9216c066-ab74-4299-b586-92eba3e4d36a-auth-proxy-config\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 
crc kubenswrapper[4678]: I1124 11:18:49.984518 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9216c066-ab74-4299-b586-92eba3e4d36a-config\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984539 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984558 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984577 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984634 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984653 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-service-ca\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984695 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42kq8\" (UniqueName: \"kubernetes.io/projected/fef47a87-3f60-4ee1-a31e-b02583fc2819-kube-api-access-42kq8\") pod \"downloads-7954f5f757-zzwvq\" (UID: \"fef47a87-3f60-4ee1-a31e-b02583fc2819\") " pod="openshift-console/downloads-7954f5f757-zzwvq" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984715 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thfdv\" (UniqueName: \"kubernetes.io/projected/3d0acb73-5437-44f1-a83e-2a3781acce52-kube-api-access-thfdv\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984742 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8fgm\" (UniqueName: \"kubernetes.io/projected/daea8216-5097-43f5-913a-eda16abaf508-kube-api-access-q8fgm\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984773 4678 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984796 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbcp\" (UniqueName: \"kubernetes.io/projected/974b621b-6635-4ca8-b53d-b15ae31b51b0-kube-api-access-lxbcp\") pod \"cluster-samples-operator-665b6dd947-wm72k\" (UID: \"974b621b-6635-4ca8-b53d-b15ae31b51b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984814 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984834 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c75wp\" (UniqueName: \"kubernetes.io/projected/38101ae8-9e21-4a62-b839-cc42e0562769-kube-api-access-c75wp\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984854 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57abb356-60a5-43ec-8ab0-07e2198a494d-serving-cert\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984882 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984903 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984927 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64cfe70c-3f37-4f26-b699-d8229dba4508-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984945 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twknq\" (UniqueName: \"kubernetes.io/projected/64cfe70c-3f37-4f26-b699-d8229dba4508-kube-api-access-twknq\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984965 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/974b621b-6635-4ca8-b53d-b15ae31b51b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wm72k\" (UID: \"974b621b-6635-4ca8-b53d-b15ae31b51b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.984982 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9216c066-ab74-4299-b586-92eba3e4d36a-machine-approver-tls\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985003 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955rz\" (UniqueName: \"kubernetes.io/projected/019dfbed-3859-4761-890e-cd8205747454-kube-api-access-955rz\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985034 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2169100e-5122-411b-9cb1-4d1ae0ebbd86-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985054 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daea8216-5097-43f5-913a-eda16abaf508-config-volume\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985077 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985094 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daea8216-5097-43f5-913a-eda16abaf508-secret-volume\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985114 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985133 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-audit-policies\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985195 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d0acb73-5437-44f1-a83e-2a3781acce52-serving-cert\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985299 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-oauth-config\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985322 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-oauth-serving-cert\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985343 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/64cfe70c-3f37-4f26-b699-d8229dba4508-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985363 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/019dfbed-3859-4761-890e-cd8205747454-audit-dir\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985381 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985400 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985421 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mknqk\" (UniqueName: \"kubernetes.io/projected/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-kube-api-access-mknqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.985442 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2169100e-5122-411b-9cb1-4d1ae0ebbd86-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.986187 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zzwvq"]
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.986231 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm"]
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.986714 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f8b8t"]
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.986724 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.987945 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-config\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.988281 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.989042 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-console-config\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.989409 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jb7bk"]
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.990477 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3d0acb73-5437-44f1-a83e-2a3781acce52-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.991486 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.991639 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-trusted-ca-bundle\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.992193 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9216c066-ab74-4299-b586-92eba3e4d36a-auth-proxy-config\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.992805 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9216c066-ab74-4299-b586-92eba3e4d36a-config\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.993330 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-service-ca-bundle\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.993850 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0acb73-5437-44f1-a83e-2a3781acce52-serving-cert\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.994060 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57abb356-60a5-43ec-8ab0-07e2198a494d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.994153 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/019dfbed-3859-4761-890e-cd8205747454-audit-dir\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.994248 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/64cfe70c-3f37-4f26-b699-d8229dba4508-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.994522 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-oauth-serving-cert\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.994561 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2169100e-5122-411b-9cb1-4d1ae0ebbd86-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.994973 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.995242 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-service-ca\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.995597 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57abb356-60a5-43ec-8ab0-07e2198a494d-serving-cert\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.995752 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k"]
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.996458 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2169100e-5122-411b-9cb1-4d1ae0ebbd86-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.995800 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.996774 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.996865 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64cfe70c-3f37-4f26-b699-d8229dba4508-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.997247 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.997267 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.997776 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-audit-policies\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.997911 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-serving-cert\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.999219 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.999715 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-oauth-config\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.999725 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:49 crc kubenswrapper[4678]: I1124 11:18:49.999903 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.000407 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.000828 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.000928 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.001206 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.001551 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rkrb2"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.002780 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.005840 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2qlj9"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.007453 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.008584 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/974b621b-6635-4ca8-b53d-b15ae31b51b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wm72k\" (UID: \"974b621b-6635-4ca8-b53d-b15ae31b51b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.008909 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.009178 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9216c066-ab74-4299-b586-92eba3e4d36a-machine-approver-tls\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.010350 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.011862 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wsncx"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.012325 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.014193 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tf9mj"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.014571 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcwcn"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.016779 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dr4nh"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.017257 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lc4nq"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.018044 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lc4nq"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.018743 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ftpl8"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.021646 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ftpl8"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.022420 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdcv5"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.024491 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.026445 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6b4xb"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.028663 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.030603 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.031159 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.031981 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.033025 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.040892 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.045397 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.048977 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ftpl8"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.052734 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.057573 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.057836 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4wkf5"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.059086 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xpp8n"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.060424 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.061485 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q2r4x"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.062575 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q2r4x"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.062693 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q2r4x"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.072332 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.084888 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.086307 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8fgm\" (UniqueName: \"kubernetes.io/projected/daea8216-5097-43f5-913a-eda16abaf508-kube-api-access-q8fgm\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.086412 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daea8216-5097-43f5-913a-eda16abaf508-config-volume\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.086440 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daea8216-5097-43f5-913a-eda16abaf508-secret-volume\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.109469 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bdkj\" (UniqueName: \"kubernetes.io/projected/b1550d14-7d6b-43b9-bbbd-268b0274028a-kube-api-access-4bdkj\") pod \"route-controller-manager-6576b87f9c-b4d2h\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.131196 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkqk6\" (UniqueName: \"kubernetes.io/projected/a44a8ca4-92df-406f-8ee7-37da7a5f6d8b-kube-api-access-nkqk6\") pod \"machine-api-operator-5694c8668f-2qlj9\" (UID: \"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.151628 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsxtw\" (UniqueName: \"kubernetes.io/projected/dd1948d5-d633-4a92-a800-776add7a0894-kube-api-access-fsxtw\") pod \"controller-manager-879f6c89f-9sfxt\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.151954 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.172426 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.195248 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.213214 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.224327 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daea8216-5097-43f5-913a-eda16abaf508-secret-volume\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.228307 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.233652 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.237790 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daea8216-5097-43f5-913a-eda16abaf508-config-volume\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.253421 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.272065 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.277795 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.292386 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.312402 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.344871 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.346335 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fqc\" (UniqueName: \"kubernetes.io/projected/430a7abd-f5ce-4886-b79a-436d715e3e1b-kube-api-access-h9fqc\") pod \"apiserver-76f77b778f-kl8pj\" (UID: \"430a7abd-f5ce-4886-b79a-436d715e3e1b\") " pod="openshift-apiserver/apiserver-76f77b778f-kl8pj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.352950 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.373502 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.382575 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"]
Nov 24 11:18:50 crc kubenswrapper[4678]: W1124 11:18:50.390028 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1550d14_7d6b_43b9_bbbd_268b0274028a.slice/crio-4d7c3e527d8069963bb9362d79abad9e3013d4fcb8c5de75c228d944d12c794e WatchSource:0}: Error finding container 4d7c3e527d8069963bb9362d79abad9e3013d4fcb8c5de75c228d944d12c794e: Status 404 returned error can't find the container with id 4d7c3e527d8069963bb9362d79abad9e3013d4fcb8c5de75c228d944d12c794e
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.391946 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.412379 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.427389 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.475166 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.493725 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.512077 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.515974 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.520784 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9sfxt"]
Nov 24 11:18:50 crc kubenswrapper[4678]: W1124 11:18:50.532249 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd1948d5_d633_4a92_a800_776add7a0894.slice/crio-18fb14b53b252f397dca48ded7ef0cb718bc5236b24f5a9dde7f4602e8f5f6dd WatchSource:0}: Error finding container 18fb14b53b252f397dca48ded7ef0cb718bc5236b24f5a9dde7f4602e8f5f6dd: Status 404 returned error can't find the container with id 18fb14b53b252f397dca48ded7ef0cb718bc5236b24f5a9dde7f4602e8f5f6dd
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.532523 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.552921 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.579128 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.591892 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.613352 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.623541 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2qlj9"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.632090 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 24 11:18:50 crc kubenswrapper[4678]: W1124 11:18:50.645355 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda44a8ca4_92df_406f_8ee7_37da7a5f6d8b.slice/crio-2954f0b2ff15bf6612023886bad7f941c8ea2dec69189000ac673df58cf6bd8d WatchSource:0}: Error finding container 2954f0b2ff15bf6612023886bad7f941c8ea2dec69189000ac673df58cf6bd8d: Status 404 returned error can't find the container with id 2954f0b2ff15bf6612023886bad7f941c8ea2dec69189000ac673df58cf6bd8d
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.653151 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.672135 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.692169 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.696253 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" event={"ID":"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b","Type":"ContainerStarted","Data":"2954f0b2ff15bf6612023886bad7f941c8ea2dec69189000ac673df58cf6bd8d"}
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.697849 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kl8pj"]
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.700731 4678 generic.go:334] "Generic (PLEG): container finished" podID="f3ba498c-9fbe-43ab-82ea-0330759be0fa" containerID="5e65127d6f33105f8f1e4f268eb061e2f47ed4717938e8a42c634f1147b44ce0" exitCode=0
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.701026 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" event={"ID":"f3ba498c-9fbe-43ab-82ea-0330759be0fa","Type":"ContainerDied","Data":"5e65127d6f33105f8f1e4f268eb061e2f47ed4717938e8a42c634f1147b44ce0"}
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.701093 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" event={"ID":"f3ba498c-9fbe-43ab-82ea-0330759be0fa","Type":"ContainerStarted","Data":"6a4b213e4faaddb62618109c35c27aeb4bd61e046685e895ff41c4afbdfe13e9"}
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.702296 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" event={"ID":"dd1948d5-d633-4a92-a800-776add7a0894","Type":"ContainerStarted","Data":"518bbfc59ceb7601c55c1078931afc8f91780d6822b520315ae5f34489a9c673"}
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.702331 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" event={"ID":"dd1948d5-d633-4a92-a800-776add7a0894","Type":"ContainerStarted","Data":"18fb14b53b252f397dca48ded7ef0cb718bc5236b24f5a9dde7f4602e8f5f6dd"}
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.702516 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.703377 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" event={"ID":"b1550d14-7d6b-43b9-bbbd-268b0274028a","Type":"ContainerStarted","Data":"19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992"}
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.703404 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" event={"ID":"b1550d14-7d6b-43b9-bbbd-268b0274028a","Type":"ContainerStarted","Data":"4d7c3e527d8069963bb9362d79abad9e3013d4fcb8c5de75c228d944d12c794e"}
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.703922 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.706200 4678 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-b4d2h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.706200 4678 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9sfxt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.706243 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" podUID="b1550d14-7d6b-43b9-bbbd-268b0274028a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.706309 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" podUID="dd1948d5-d633-4a92-a800-776add7a0894" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.712145 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.731845 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.751538 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.777208 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.793633 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.812821 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.832862 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.852856 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.873472 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.891822 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 
11:18:50.915066 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.929781 4678 request.go:700] Waited for 1.001792955s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0 Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.931851 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.954640 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.976248 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 11:18:50 crc kubenswrapper[4678]: I1124 11:18:50.993943 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.011941 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.033092 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.051509 4678 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.071957 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.094032 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.111091 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.132223 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.155052 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.178964 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.192190 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.211819 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.232443 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.251569 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 
11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.273039 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.292058 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.312394 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.332541 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.352841 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.372990 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.394083 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.412216 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.432913 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.452152 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 11:18:51 crc 
kubenswrapper[4678]: I1124 11:18:51.472931 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.493120 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.512448 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.532112 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.564803 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.572902 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.593102 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.613032 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.632306 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.672537 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99n7d\" (UniqueName: \"kubernetes.io/projected/2169100e-5122-411b-9cb1-4d1ae0ebbd86-kube-api-access-99n7d\") pod 
\"openshift-apiserver-operator-796bbdcf4f-6v64g\" (UID: \"2169100e-5122-411b-9cb1-4d1ae0ebbd86\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.695416 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjrt9\" (UniqueName: \"kubernetes.io/projected/9216c066-ab74-4299-b586-92eba3e4d36a-kube-api-access-pjrt9\") pod \"machine-approver-56656f9798-g4p7d\" (UID: \"9216c066-ab74-4299-b586-92eba3e4d36a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.708272 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thfdv\" (UniqueName: \"kubernetes.io/projected/3d0acb73-5437-44f1-a83e-2a3781acce52-kube-api-access-thfdv\") pod \"openshift-config-operator-7777fb866f-xz9nm\" (UID: \"3d0acb73-5437-44f1-a83e-2a3781acce52\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.710906 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" event={"ID":"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b","Type":"ContainerStarted","Data":"411e2d4d38f150f894888b98c30f6860738b435a68ba90cda59fcd321eeaf37f"} Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.710988 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" event={"ID":"a44a8ca4-92df-406f-8ee7-37da7a5f6d8b","Type":"ContainerStarted","Data":"02a6771b04cd3cc531d4709e05ebed297539606e0f23566c6737424f0282e786"} Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.715959 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" 
event={"ID":"f3ba498c-9fbe-43ab-82ea-0330759be0fa","Type":"ContainerStarted","Data":"49c71440e353fcd1bd525952a34336eb81e74368d9e044d0ed152fdbb9ca16c8"} Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.719101 4678 generic.go:334] "Generic (PLEG): container finished" podID="430a7abd-f5ce-4886-b79a-436d715e3e1b" containerID="272daaced68e6b5f2d989a1bb528e374485a77de6b6d3ebd5e45d25738d46f40" exitCode=0 Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.720427 4678 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9sfxt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.720505 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" podUID="dd1948d5-d633-4a92-a800-776add7a0894" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.722599 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" event={"ID":"430a7abd-f5ce-4886-b79a-436d715e3e1b","Type":"ContainerDied","Data":"272daaced68e6b5f2d989a1bb528e374485a77de6b6d3ebd5e45d25738d46f40"} Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.722652 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" event={"ID":"430a7abd-f5ce-4886-b79a-436d715e3e1b","Type":"ContainerStarted","Data":"4c5994c6faeac461f8193142662f512af452cfb6dac8d72b58818b69184ecab3"} Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.726587 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbcp\" (UniqueName: 
\"kubernetes.io/projected/974b621b-6635-4ca8-b53d-b15ae31b51b0-kube-api-access-lxbcp\") pod \"cluster-samples-operator-665b6dd947-wm72k\" (UID: \"974b621b-6635-4ca8-b53d-b15ae31b51b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.732164 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.750588 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c75wp\" (UniqueName: \"kubernetes.io/projected/38101ae8-9e21-4a62-b839-cc42e0562769-kube-api-access-c75wp\") pod \"console-f9d7485db-chw9t\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.765886 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.766450 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g77f\" (UniqueName: \"kubernetes.io/projected/57abb356-60a5-43ec-8ab0-07e2198a494d-kube-api-access-7g77f\") pod \"authentication-operator-69f744f599-jb7bk\" (UID: \"57abb356-60a5-43ec-8ab0-07e2198a494d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.775367 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.783007 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.787062 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42kq8\" (UniqueName: \"kubernetes.io/projected/fef47a87-3f60-4ee1-a31e-b02583fc2819-kube-api-access-42kq8\") pod \"downloads-7954f5f757-zzwvq\" (UID: \"fef47a87-3f60-4ee1-a31e-b02583fc2819\") " pod="openshift-console/downloads-7954f5f757-zzwvq" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.800355 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.807357 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.814620 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64cfe70c-3f37-4f26-b699-d8229dba4508-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.817464 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zzwvq" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.834817 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.838876 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twknq\" (UniqueName: \"kubernetes.io/projected/64cfe70c-3f37-4f26-b699-d8229dba4508-kube-api-access-twknq\") pod \"cluster-image-registry-operator-dc59b4c8b-b2wdn\" (UID: \"64cfe70c-3f37-4f26-b699-d8229dba4508\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.855358 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mknqk\" (UniqueName: \"kubernetes.io/projected/902681dd-c0f3-4fda-8d56-c3fff7e3fcec-kube-api-access-mknqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gb9k\" (UID: \"902681dd-c0f3-4fda-8d56-c3fff7e3fcec\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" Nov 24 11:18:51 crc kubenswrapper[4678]: W1124 11:18:51.865336 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9216c066_ab74_4299_b586_92eba3e4d36a.slice/crio-fb03c86695b6e11bbfd6a51be383fdd42b785c31d59646df73f86e477dcb6ae1 WatchSource:0}: Error finding container fb03c86695b6e11bbfd6a51be383fdd42b785c31d59646df73f86e477dcb6ae1: Status 404 returned error can't find the container with id fb03c86695b6e11bbfd6a51be383fdd42b785c31d59646df73f86e477dcb6ae1 Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.871635 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955rz\" (UniqueName: \"kubernetes.io/projected/019dfbed-3859-4761-890e-cd8205747454-kube-api-access-955rz\") pod \"oauth-openshift-558db77b4-tf9mj\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") " pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:51 crc 
kubenswrapper[4678]: I1124 11:18:51.874914 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.895580 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.915270 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.930260 4678 request.go:700] Waited for 1.906302461s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.932126 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.946286 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.952559 4678 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.973503 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 11:18:51 crc kubenswrapper[4678]: I1124 11:18:51.992363 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.014887 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.038962 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.052601 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.054568 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.058203 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.092338 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.097105 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.103608 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8fgm\" (UniqueName: \"kubernetes.io/projected/daea8216-5097-43f5-913a-eda16abaf508-kube-api-access-q8fgm\") pod \"collect-profiles-29399715-h2fzj\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117604 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-registry-certificates\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117643 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c54546d6-cb67-47c3-97dc-36d0433d6066-config\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117683 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73483387-af92-4cdd-872f-3bf8e62032b1-metrics-tls\") pod \"dns-operator-744455d44c-f8b8t\" (UID: \"73483387-af92-4cdd-872f-3bf8e62032b1\") " pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117768 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-service-ca\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117811 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117828 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-trusted-ca\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117860 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-registry-tls\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117881 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzntd\" (UniqueName: \"kubernetes.io/projected/c54546d6-cb67-47c3-97dc-36d0433d6066-kube-api-access-vzntd\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.117982 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c1ade65-11e8-4529-9885-7630968a4b98-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118024 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c54546d6-cb67-47c3-97dc-36d0433d6066-trusted-ca\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118065 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-config\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118082 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d72dd0f-a43c-4ee8-8a71-656141506c59-metrics-tls\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118110 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7989598a-648e-4c88-aeed-1a54f14f8eab-serving-cert\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118126 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d72dd0f-a43c-4ee8-8a71-656141506c59-trusted-ca\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118142 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d72dd0f-a43c-4ee8-8a71-656141506c59-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118170 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqrc\" (UniqueName: \"kubernetes.io/projected/7d72dd0f-a43c-4ee8-8a71-656141506c59-kube-api-access-8bqrc\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118196 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl79j\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-kube-api-access-rl79j\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118223 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-bound-sa-token\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118240 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b928p\" (UniqueName: \"kubernetes.io/projected/7989598a-648e-4c88-aeed-1a54f14f8eab-kube-api-access-b928p\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118257 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-ca\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118315 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c1ade65-11e8-4529-9885-7630968a4b98-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118364 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-client\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118383 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f845\" (UniqueName: \"kubernetes.io/projected/73483387-af92-4cdd-872f-3bf8e62032b1-kube-api-access-2f845\") pod \"dns-operator-744455d44c-f8b8t\" (UID: \"73483387-af92-4cdd-872f-3bf8e62032b1\") " pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.118489 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c54546d6-cb67-47c3-97dc-36d0433d6066-serving-cert\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2"
Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.121623 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:52.621605926 +0000 UTC m=+143.552665555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:52 crc kubenswrapper[4678]: W1124 11:18:52.135083 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2169100e_5122_411b_9cb1_4d1ae0ebbd86.slice/crio-f500d7bdf02f0db3d69fc9625b4c5f8195eeeb8fad994ae11800c75f3e08dabc WatchSource:0}: Error finding container f500d7bdf02f0db3d69fc9625b4c5f8195eeeb8fad994ae11800c75f3e08dabc: Status 404 returned error can't find the container with id f500d7bdf02f0db3d69fc9625b4c5f8195eeeb8fad994ae11800c75f3e08dabc
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.169050 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.219640 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.219880 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:52.719838695 +0000 UTC m=+143.650898334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.219934 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw4sh\" (UniqueName: \"kubernetes.io/projected/16c36416-1b0e-493e-b349-3dbd7c007e29-kube-api-access-vw4sh\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.219985 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c54546d6-cb67-47c3-97dc-36d0433d6066-trusted-ca\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220005 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03e0e923-647d-4e57-975a-d4d3e2c22cb5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: \"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220026 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-signing-cabundle\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220055 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb63fa7c-3843-434c-97f9-4563b81f1b0d-metrics-tls\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220071 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/39caea0d-552b-4862-a9fd-0c82865ba675-srv-cert\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220095 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-config\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220115 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d72dd0f-a43c-4ee8-8a71-656141506c59-trusted-ca\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220131 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d72dd0f-a43c-4ee8-8a71-656141506c59-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220149 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-stats-auth\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220169 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl79j\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-kube-api-access-rl79j\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220190 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bxsv\" (UniqueName: \"kubernetes.io/projected/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-kube-api-access-4bxsv\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220207 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wt9n\" (UniqueName: \"kubernetes.io/projected/c488212d-33c5-4863-b35f-a7764a62ccfb-kube-api-access-6wt9n\") pod \"package-server-manager-789f6589d5-mgcsk\" (UID: \"c488212d-33c5-4863-b35f-a7764a62ccfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220226 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfshk\" (UniqueName: \"kubernetes.io/projected/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-kube-api-access-mfshk\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220246 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-ca\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220266 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh92g\" (UniqueName: \"kubernetes.io/projected/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-kube-api-access-hh92g\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: \"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220307 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-client\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220348 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eec10a91-53ab-46cf-917c-5bbc191c0e68-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220365 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hvf\" (UniqueName: \"kubernetes.io/projected/fb63fa7c-3843-434c-97f9-4563b81f1b0d-kube-api-access-w6hvf\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220385 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5cl\" (UniqueName: \"kubernetes.io/projected/ac428049-1481-4d93-acbc-d18a1b81b60c-kube-api-access-cx5cl\") pod \"migrator-59844c95c7-5t5cn\" (UID: \"ac428049-1481-4d93-acbc-d18a1b81b60c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220415 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c54546d6-cb67-47c3-97dc-36d0433d6066-serving-cert\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220436 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eec10a91-53ab-46cf-917c-5bbc191c0e68-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220453 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ddc381d-aa5f-48f3-af7d-71987e847670-serving-cert\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220471 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e0a20b0-a531-4f04-9cdd-d62131c816ed-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220509 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c54546d6-cb67-47c3-97dc-36d0433d6066-config\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220531 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrr2\" (UniqueName: \"kubernetes.io/projected/80ecc549-e277-418f-bf45-873acf3b8794-kube-api-access-dwrr2\") pod \"multus-admission-controller-857f4d67dd-4wkf5\" (UID: \"80ecc549-e277-418f-bf45-873acf3b8794\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220550 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-csi-data-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220584 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-signing-key\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220601 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a20b0-a531-4f04-9cdd-d62131c816ed-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220620 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-tmpfs\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220639 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec10a91-53ab-46cf-917c-5bbc191c0e68-config\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220695 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tg5d\" (UniqueName: \"kubernetes.io/projected/45a91a43-cc29-4d11-b78b-27f24c8f89a1-kube-api-access-6tg5d\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fwc2\" (UID: \"45a91a43-cc29-4d11-b78b-27f24c8f89a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220712 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38fca0ce-fe47-4830-9403-4148d1195b66-certs\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220728 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220762 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-registry-tls\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220788 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-images\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220825 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg7x4\" (UniqueName: \"kubernetes.io/projected/4f36b66c-a595-4427-b08a-508b9bf5a27b-kube-api-access-sg7x4\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220855 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c488212d-33c5-4863-b35f-a7764a62ccfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mgcsk\" (UID: \"c488212d-33c5-4863-b35f-a7764a62ccfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220876 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b98jf\" (UniqueName: \"kubernetes.io/projected/39caea0d-552b-4862-a9fd-0c82865ba675-kube-api-access-b98jf\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220900 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c1ade65-11e8-4529-9885-7630968a4b98-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220934 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c36416-1b0e-493e-b349-3dbd7c007e29-service-ca-bundle\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220962 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-apiservice-cert\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.220988 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d72dd0f-a43c-4ee8-8a71-656141506c59-metrics-tls\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221003 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-socket-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221021 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7989598a-648e-4c88-aeed-1a54f14f8eab-serving-cert\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221037 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/39caea0d-552b-4862-a9fd-0c82865ba675-profile-collector-cert\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221053 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-proxy-tls\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221084 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqrc\" (UniqueName: \"kubernetes.io/projected/7d72dd0f-a43c-4ee8-8a71-656141506c59-kube-api-access-8bqrc\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221140 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddc381d-aa5f-48f3-af7d-71987e847670-config\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221159 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc59508e-6c7e-4810-97db-e651e8f021ba-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221187 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-bound-sa-token\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221207 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b928p\" (UniqueName: \"kubernetes.io/projected/7989598a-648e-4c88-aeed-1a54f14f8eab-kube-api-access-b928p\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221227 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-mountpoint-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221258 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-plugins-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221279 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c1ade65-11e8-4529-9885-7630968a4b98-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221296 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rcwz\" (UniqueName: \"kubernetes.io/projected/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-kube-api-access-7rcwz\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221315 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cba64ea-58ae-4563-a1a5-0958891339e5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221332 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc59508e-6c7e-4810-97db-e651e8f021ba-srv-cert\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221353 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f845\" (UniqueName: \"kubernetes.io/projected/73483387-af92-4cdd-872f-3bf8e62032b1-kube-api-access-2f845\") pod \"dns-operator-744455d44c-f8b8t\" (UID: \"73483387-af92-4cdd-872f-3bf8e62032b1\") " pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221391 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-webhook-cert\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221409 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6znf4\" (UniqueName: \"kubernetes.io/projected/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-kube-api-access-6znf4\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221436 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb63fa7c-3843-434c-97f9-4563b81f1b0d-config-volume\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221469 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdnxw\" (UniqueName: \"kubernetes.io/projected/0ddc381d-aa5f-48f3-af7d-71987e847670-kube-api-access-kdnxw\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221537 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc8bj\" (UniqueName: \"kubernetes.io/projected/38fca0ce-fe47-4830-9403-4148d1195b66-kube-api-access-tc8bj\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221556 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d-cert\") pod \"ingress-canary-q2r4x\" (UID: \"4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d\") " pod="openshift-ingress-canary/ingress-canary-q2r4x"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221587 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-registry-certificates\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221605 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e0a20b0-a531-4f04-9cdd-d62131c816ed-config\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221625 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName:
\"kubernetes.io/projected/4cba64ea-58ae-4563-a1a5-0958891339e5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221645 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73483387-af92-4cdd-872f-3bf8e62032b1-metrics-tls\") pod \"dns-operator-744455d44c-f8b8t\" (UID: \"73483387-af92-4cdd-872f-3bf8e62032b1\") " pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221663 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-metrics-certs\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221832 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a91a43-cc29-4d11-b78b-27f24c8f89a1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fwc2\" (UID: \"45a91a43-cc29-4d11-b78b-27f24c8f89a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221850 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03e0e923-647d-4e57-975a-d4d3e2c22cb5-proxy-tls\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: \"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221865 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38fca0ce-fe47-4830-9403-4148d1195b66-node-bootstrap-token\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221867 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c54546d6-cb67-47c3-97dc-36d0433d6066-trusted-ca\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.221882 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222002 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-service-ca\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222049 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: \"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222096 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222124 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80ecc549-e277-418f-bf45-873acf3b8794-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4wkf5\" (UID: \"80ecc549-e277-418f-bf45-873acf3b8794\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222162 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84vc4\" (UniqueName: \"kubernetes.io/projected/4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d-kube-api-access-84vc4\") pod \"ingress-canary-q2r4x\" (UID: \"4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d\") " pod="openshift-ingress-canary/ingress-canary-q2r4x" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222254 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: 
\"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222290 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-trusted-ca\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222336 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzntd\" (UniqueName: \"kubernetes.io/projected/c54546d6-cb67-47c3-97dc-36d0433d6066-kube-api-access-vzntd\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222362 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-default-certificate\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222392 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdvg5\" (UniqueName: \"kubernetes.io/projected/03e0e923-647d-4e57-975a-d4d3e2c22cb5-kube-api-access-xdvg5\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: \"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222419 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-42h2d\" (UniqueName: \"kubernetes.io/projected/bc59508e-6c7e-4810-97db-e651e8f021ba-kube-api-access-42h2d\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222449 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-registration-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222483 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cba64ea-58ae-4563-a1a5-0958891339e5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.222509 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: \"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.228336 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 11:18:52.728306932 +0000 UTC m=+143.659366761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.235030 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c54546d6-cb67-47c3-97dc-36d0433d6066-config\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.236071 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d72dd0f-a43c-4ee8-8a71-656141506c59-trusted-ca\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.239816 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-ca\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.240315 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d72dd0f-a43c-4ee8-8a71-656141506c59-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.241000 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73483387-af92-4cdd-872f-3bf8e62032b1-metrics-tls\") pod \"dns-operator-744455d44c-f8b8t\" (UID: \"73483387-af92-4cdd-872f-3bf8e62032b1\") " pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.241076 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jb7bk"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.241545 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-service-ca\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.243005 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-trusted-ca\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.243286 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-registry-certificates\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 
11:18:52.245697 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c54546d6-cb67-47c3-97dc-36d0433d6066-serving-cert\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.245907 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c1ade65-11e8-4529-9885-7630968a4b98-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.247087 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7989598a-648e-4c88-aeed-1a54f14f8eab-config\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.253095 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bqrc\" (UniqueName: \"kubernetes.io/projected/7d72dd0f-a43c-4ee8-8a71-656141506c59-kube-api-access-8bqrc\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.253542 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7989598a-648e-4c88-aeed-1a54f14f8eab-serving-cert\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:52 crc kubenswrapper[4678]: 
I1124 11:18:52.257186 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-registry-tls\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: W1124 11:18:52.258142 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57abb356_60a5_43ec_8ab0_07e2198a494d.slice/crio-e9c02b9367951fe992555daff4d37ffac9fd298ffaefe06b44a4648d56393e87 WatchSource:0}: Error finding container e9c02b9367951fe992555daff4d37ffac9fd298ffaefe06b44a4648d56393e87: Status 404 returned error can't find the container with id e9c02b9367951fe992555daff4d37ffac9fd298ffaefe06b44a4648d56393e87 Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.261252 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c1ade65-11e8-4529-9885-7630968a4b98-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.261523 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7989598a-648e-4c88-aeed-1a54f14f8eab-etcd-client\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.272910 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.275813 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzntd\" (UniqueName: \"kubernetes.io/projected/c54546d6-cb67-47c3-97dc-36d0433d6066-kube-api-access-vzntd\") pod \"console-operator-58897d9998-rkrb2\" (UID: \"c54546d6-cb67-47c3-97dc-36d0433d6066\") " pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.292254 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.300187 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d72dd0f-a43c-4ee8-8a71-656141506c59-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9r9fl\" (UID: \"7d72dd0f-a43c-4ee8-8a71-656141506c59\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323309 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323555 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e0a20b0-a531-4f04-9cdd-d62131c816ed-config\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323583 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-metrics-certs\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323614 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cba64ea-58ae-4563-a1a5-0958891339e5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323638 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a91a43-cc29-4d11-b78b-27f24c8f89a1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fwc2\" (UID: \"45a91a43-cc29-4d11-b78b-27f24c8f89a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323658 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03e0e923-647d-4e57-975a-d4d3e2c22cb5-proxy-tls\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: \"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.323733 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:52.823694338 +0000 UTC m=+143.754754167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323806 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38fca0ce-fe47-4830-9403-4148d1195b66-node-bootstrap-token\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323879 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: \"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323911 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323937 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.323965 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84vc4\" (UniqueName: \"kubernetes.io/projected/4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d-kube-api-access-84vc4\") pod \"ingress-canary-q2r4x\" (UID: \"4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d\") " pod="openshift-ingress-canary/ingress-canary-q2r4x" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324022 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324059 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80ecc549-e277-418f-bf45-873acf3b8794-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4wkf5\" (UID: \"80ecc549-e277-418f-bf45-873acf3b8794\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324100 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-default-certificate\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 
11:18:52.324132 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdvg5\" (UniqueName: \"kubernetes.io/projected/03e0e923-647d-4e57-975a-d4d3e2c22cb5-kube-api-access-xdvg5\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: \"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324161 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42h2d\" (UniqueName: \"kubernetes.io/projected/bc59508e-6c7e-4810-97db-e651e8f021ba-kube-api-access-42h2d\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324189 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-registration-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324221 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cba64ea-58ae-4563-a1a5-0958891339e5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324249 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: 
\"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324290 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw4sh\" (UniqueName: \"kubernetes.io/projected/16c36416-1b0e-493e-b349-3dbd7c007e29-kube-api-access-vw4sh\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324319 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03e0e923-647d-4e57-975a-d4d3e2c22cb5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: \"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324347 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-signing-cabundle\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324380 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb63fa7c-3843-434c-97f9-4563b81f1b0d-metrics-tls\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324407 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/39caea0d-552b-4862-a9fd-0c82865ba675-srv-cert\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324445 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-stats-auth\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324493 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bxsv\" (UniqueName: \"kubernetes.io/projected/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-kube-api-access-4bxsv\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.324502 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:52.824492181 +0000 UTC m=+143.755551810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324597 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wt9n\" (UniqueName: \"kubernetes.io/projected/c488212d-33c5-4863-b35f-a7764a62ccfb-kube-api-access-6wt9n\") pod \"package-server-manager-789f6589d5-mgcsk\" (UID: \"c488212d-33c5-4863-b35f-a7764a62ccfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324625 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfshk\" (UniqueName: \"kubernetes.io/projected/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-kube-api-access-mfshk\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324651 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh92g\" (UniqueName: \"kubernetes.io/projected/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-kube-api-access-hh92g\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: \"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324710 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/eec10a91-53ab-46cf-917c-5bbc191c0e68-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324729 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hvf\" (UniqueName: \"kubernetes.io/projected/fb63fa7c-3843-434c-97f9-4563b81f1b0d-kube-api-access-w6hvf\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324752 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5cl\" (UniqueName: \"kubernetes.io/projected/ac428049-1481-4d93-acbc-d18a1b81b60c-kube-api-access-cx5cl\") pod \"migrator-59844c95c7-5t5cn\" (UID: \"ac428049-1481-4d93-acbc-d18a1b81b60c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324787 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eec10a91-53ab-46cf-917c-5bbc191c0e68-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324806 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ddc381d-aa5f-48f3-af7d-71987e847670-serving-cert\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" Nov 24 
11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324826 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e0a20b0-a531-4f04-9cdd-d62131c816ed-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324846 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrr2\" (UniqueName: \"kubernetes.io/projected/80ecc549-e277-418f-bf45-873acf3b8794-kube-api-access-dwrr2\") pod \"multus-admission-controller-857f4d67dd-4wkf5\" (UID: \"80ecc549-e277-418f-bf45-873acf3b8794\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324876 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-csi-data-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324900 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-signing-key\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324918 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a20b0-a531-4f04-9cdd-d62131c816ed-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324939 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec10a91-53ab-46cf-917c-5bbc191c0e68-config\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324979 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tg5d\" (UniqueName: \"kubernetes.io/projected/45a91a43-cc29-4d11-b78b-27f24c8f89a1-kube-api-access-6tg5d\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fwc2\" (UID: \"45a91a43-cc29-4d11-b78b-27f24c8f89a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.324998 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-tmpfs\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325015 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325039 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/38fca0ce-fe47-4830-9403-4148d1195b66-certs\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325062 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-images\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325082 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg7x4\" (UniqueName: \"kubernetes.io/projected/4f36b66c-a595-4427-b08a-508b9bf5a27b-kube-api-access-sg7x4\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325103 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c488212d-33c5-4863-b35f-a7764a62ccfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mgcsk\" (UID: \"c488212d-33c5-4863-b35f-a7764a62ccfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325124 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b98jf\" (UniqueName: \"kubernetes.io/projected/39caea0d-552b-4862-a9fd-0c82865ba675-kube-api-access-b98jf\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:52 crc 
kubenswrapper[4678]: I1124 11:18:52.325159 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c36416-1b0e-493e-b349-3dbd7c007e29-service-ca-bundle\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325183 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-apiservice-cert\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325204 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-socket-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325222 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/39caea0d-552b-4862-a9fd-0c82865ba675-profile-collector-cert\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325236 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-proxy-tls\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325259 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddc381d-aa5f-48f3-af7d-71987e847670-config\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325275 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc59508e-6c7e-4810-97db-e651e8f021ba-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325304 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-mountpoint-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325327 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rcwz\" (UniqueName: \"kubernetes.io/projected/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-kube-api-access-7rcwz\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325342 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cba64ea-58ae-4563-a1a5-0958891339e5-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325358 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc59508e-6c7e-4810-97db-e651e8f021ba-srv-cert\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325375 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-plugins-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325406 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-webhook-cert\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325422 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6znf4\" (UniqueName: \"kubernetes.io/projected/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-kube-api-access-6znf4\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325448 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/fb63fa7c-3843-434c-97f9-4563b81f1b0d-config-volume\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325483 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdnxw\" (UniqueName: \"kubernetes.io/projected/0ddc381d-aa5f-48f3-af7d-71987e847670-kube-api-access-kdnxw\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325506 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc8bj\" (UniqueName: \"kubernetes.io/projected/38fca0ce-fe47-4830-9403-4148d1195b66-kube-api-access-tc8bj\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325523 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d-cert\") pod \"ingress-canary-q2r4x\" (UID: \"4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d\") " pod="openshift-ingress-canary/ingress-canary-q2r4x" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.325770 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e0a20b0-a531-4f04-9cdd-d62131c816ed-config\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.326345 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.327752 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-mountpoint-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.331369 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: \"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.332091 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eec10a91-53ab-46cf-917c-5bbc191c0e68-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.334049 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddc381d-aa5f-48f3-af7d-71987e847670-config\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" Nov 24 11:18:52 crc 
kubenswrapper[4678]: I1124 11:18:52.334894 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-socket-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.335343 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-registration-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.335550 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-signing-cabundle\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.337306 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-images\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.337811 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-plugins-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.337917 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4f36b66c-a595-4427-b08a-508b9bf5a27b-csi-data-dir\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.339123 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb63fa7c-3843-434c-97f9-4563b81f1b0d-config-volume\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.339868 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.341009 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec10a91-53ab-46cf-917c-5bbc191c0e68-config\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.343725 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc59508e-6c7e-4810-97db-e651e8f021ba-srv-cert\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 
11:18:52.344014 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb63fa7c-3843-434c-97f9-4563b81f1b0d-metrics-tls\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.344251 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-webhook-cert\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.344368 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03e0e923-647d-4e57-975a-d4d3e2c22cb5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: \"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.344645 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0a20b0-a531-4f04-9cdd-d62131c816ed-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.344797 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/39caea0d-552b-4862-a9fd-0c82865ba675-profile-collector-cert\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.345103 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-proxy-tls\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.345359 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc59508e-6c7e-4810-97db-e651e8f021ba-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.345371 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cba64ea-58ae-4563-a1a5-0958891339e5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.345720 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-tmpfs\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.345750 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c488212d-33c5-4863-b35f-a7764a62ccfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mgcsk\" (UID: \"c488212d-33c5-4863-b35f-a7764a62ccfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.346273 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cba64ea-58ae-4563-a1a5-0958891339e5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.346355 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/39caea0d-552b-4862-a9fd-0c82865ba675-srv-cert\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.346505 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c36416-1b0e-493e-b349-3dbd7c007e29-service-ca-bundle\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.354785 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a91a43-cc29-4d11-b78b-27f24c8f89a1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fwc2\" (UID: \"45a91a43-cc29-4d11-b78b-27f24c8f89a1\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.356317 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl79j\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-kube-api-access-rl79j\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.356349 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-metrics-certs\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.356354 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38fca0ce-fe47-4830-9403-4148d1195b66-certs\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.357661 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80ecc549-e277-418f-bf45-873acf3b8794-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4wkf5\" (UID: \"80ecc549-e277-418f-bf45-873acf3b8794\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.359486 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03e0e923-647d-4e57-975a-d4d3e2c22cb5-proxy-tls\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: 
\"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.362444 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d-cert\") pod \"ingress-canary-q2r4x\" (UID: \"4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d\") " pod="openshift-ingress-canary/ingress-canary-q2r4x" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.366883 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zzwvq"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.367690 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tf9mj"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.368366 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.368407 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38fca0ce-fe47-4830-9403-4148d1195b66-node-bootstrap-token\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.368661 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-stats-auth\") pod \"router-default-5444994796-qlttx\" (UID: 
\"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.368815 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ddc381d-aa5f-48f3-af7d-71987e847670-serving-cert\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.368516 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16c36416-1b0e-493e-b349-3dbd7c007e29-default-certificate\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.369383 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-signing-key\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.370611 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: \"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.370906 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-apiservice-cert\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.379187 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f845\" (UniqueName: \"kubernetes.io/projected/73483387-af92-4cdd-872f-3bf8e62032b1-kube-api-access-2f845\") pod \"dns-operator-744455d44c-f8b8t\" (UID: \"73483387-af92-4cdd-872f-3bf8e62032b1\") " pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.389201 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-bound-sa-token\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.393808 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-chw9t"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.404871 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b928p\" (UniqueName: \"kubernetes.io/projected/7989598a-648e-4c88-aeed-1a54f14f8eab-kube-api-access-b928p\") pod \"etcd-operator-b45778765-dr4nh\" (UID: \"7989598a-648e-4c88-aeed-1a54f14f8eab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.407724 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn"] Nov 24 11:18:52 crc kubenswrapper[4678]: W1124 11:18:52.411988 4678 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38101ae8_9e21_4a62_b839_cc42e0562769.slice/crio-0e8e3fc47eb350b153d883c87c4ba354dbb2ff870269e5049d32cc6a4f857ee8 WatchSource:0}: Error finding container 0e8e3fc47eb350b153d883c87c4ba354dbb2ff870269e5049d32cc6a4f857ee8: Status 404 returned error can't find the container with id 0e8e3fc47eb350b153d883c87c4ba354dbb2ff870269e5049d32cc6a4f857ee8 Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.426585 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.427739 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:52.927715635 +0000 UTC m=+143.858775274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.433106 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bxsv\" (UniqueName: \"kubernetes.io/projected/8fa739d5-80cb-4afd-9ab9-850bf4a796d4-kube-api-access-4bxsv\") pod \"machine-config-operator-74547568cd-r5wl7\" (UID: \"8fa739d5-80cb-4afd-9ab9-850bf4a796d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.447803 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.449845 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cba64ea-58ae-4563-a1a5-0958891339e5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8d997\" (UID: \"4cba64ea-58ae-4563-a1a5-0958891339e5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.454133 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.461449 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.465086 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.472187 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wt9n\" (UniqueName: \"kubernetes.io/projected/c488212d-33c5-4863-b35f-a7764a62ccfb-kube-api-access-6wt9n\") pod \"package-server-manager-789f6589d5-mgcsk\" (UID: \"c488212d-33c5-4863-b35f-a7764a62ccfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.477610 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" Nov 24 11:18:52 crc kubenswrapper[4678]: W1124 11:18:52.487825 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaea8216_5097_43f5_913a_eda16abaf508.slice/crio-938c4aba87a4c7e300879af406b1fb35b49d1adb6b8b878d75def08dc4915421 WatchSource:0}: Error finding container 938c4aba87a4c7e300879af406b1fb35b49d1adb6b8b878d75def08dc4915421: Status 404 returned error can't find the container with id 938c4aba87a4c7e300879af406b1fb35b49d1adb6b8b878d75def08dc4915421 Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.489454 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84vc4\" (UniqueName: \"kubernetes.io/projected/4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d-kube-api-access-84vc4\") pod \"ingress-canary-q2r4x\" (UID: \"4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d\") " pod="openshift-ingress-canary/ingress-canary-q2r4x" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.508417 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw4sh\" (UniqueName: \"kubernetes.io/projected/16c36416-1b0e-493e-b349-3dbd7c007e29-kube-api-access-vw4sh\") pod \"router-default-5444994796-qlttx\" (UID: \"16c36416-1b0e-493e-b349-3dbd7c007e29\") " pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.528788 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.529216 4678 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.029196319 +0000 UTC m=+143.960255958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.535738 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfshk\" (UniqueName: \"kubernetes.io/projected/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-kube-api-access-mfshk\") pod \"marketplace-operator-79b997595-bdcv5\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.552162 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh92g\" (UniqueName: \"kubernetes.io/projected/be9c5230-fd8a-4d57-8cd7-2be1987a9aad-kube-api-access-hh92g\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjcrp\" (UID: \"be9c5230-fd8a-4d57-8cd7-2be1987a9aad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.556515 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.569201 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b98jf\" (UniqueName: \"kubernetes.io/projected/39caea0d-552b-4862-a9fd-0c82865ba675-kube-api-access-b98jf\") pod \"catalog-operator-68c6474976-76dnj\" (UID: \"39caea0d-552b-4862-a9fd-0c82865ba675\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.590549 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.594517 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rcwz\" (UniqueName: \"kubernetes.io/projected/c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea-kube-api-access-7rcwz\") pod \"service-ca-9c57cc56f-6b4xb\" (UID: \"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.621868 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hvf\" (UniqueName: \"kubernetes.io/projected/fb63fa7c-3843-434c-97f9-4563b81f1b0d-kube-api-access-w6hvf\") pod \"dns-default-xpp8n\" (UID: \"fb63fa7c-3843-434c-97f9-4563b81f1b0d\") " pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.637401 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.638217 4678 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.138195753 +0000 UTC m=+144.069255392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.638367 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.640561 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eec10a91-53ab-46cf-917c-5bbc191c0e68-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z8r6h\" (UID: \"eec10a91-53ab-46cf-917c-5bbc191c0e68\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.651220 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.668013 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.672213 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdvg5\" (UniqueName: \"kubernetes.io/projected/03e0e923-647d-4e57-975a-d4d3e2c22cb5-kube-api-access-xdvg5\") pod \"machine-config-controller-84d6567774-9k4bm\" (UID: \"03e0e923-647d-4e57-975a-d4d3e2c22cb5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.689819 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrr2\" (UniqueName: \"kubernetes.io/projected/80ecc549-e277-418f-bf45-873acf3b8794-kube-api-access-dwrr2\") pod \"multus-admission-controller-857f4d67dd-4wkf5\" (UID: \"80ecc549-e277-418f-bf45-873acf3b8794\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.694335 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg7x4\" (UniqueName: \"kubernetes.io/projected/4f36b66c-a595-4427-b08a-508b9bf5a27b-kube-api-access-sg7x4\") pod \"csi-hostpathplugin-ftpl8\" (UID: \"4f36b66c-a595-4427-b08a-508b9bf5a27b\") " pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.707994 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42h2d\" (UniqueName: \"kubernetes.io/projected/bc59508e-6c7e-4810-97db-e651e8f021ba-kube-api-access-42h2d\") pod \"olm-operator-6b444d44fb-dfcjk\" (UID: \"bc59508e-6c7e-4810-97db-e651e8f021ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.711937 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.711964 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q2r4x" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.732447 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" event={"ID":"9216c066-ab74-4299-b586-92eba3e4d36a","Type":"ContainerStarted","Data":"b9435d3e733352486e93b79c030580a069ecd8eff96f4e868130127007ed5332"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.732493 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" event={"ID":"9216c066-ab74-4299-b586-92eba3e4d36a","Type":"ContainerStarted","Data":"fb03c86695b6e11bbfd6a51be383fdd42b785c31d59646df73f86e477dcb6ae1"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.738895 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.739402 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.239385507 +0000 UTC m=+144.170445146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.741418 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e0a20b0-a531-4f04-9cdd-d62131c816ed-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-84jsm\" (UID: \"6e0a20b0-a531-4f04-9cdd-d62131c816ed\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.744569 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zzwvq" event={"ID":"fef47a87-3f60-4ee1-a31e-b02583fc2819","Type":"ContainerStarted","Data":"aba273be0eac781334affad1541d2a03880bf5e00f820ed4687bdaac3f3ae616"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.750268 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl"] Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.753091 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tg5d\" (UniqueName: \"kubernetes.io/projected/45a91a43-cc29-4d11-b78b-27f24c8f89a1-kube-api-access-6tg5d\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fwc2\" (UID: \"45a91a43-cc29-4d11-b78b-27f24c8f89a1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.763244 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" event={"ID":"3d0acb73-5437-44f1-a83e-2a3781acce52","Type":"ContainerDied","Data":"1e15e4f9d962a3fe17133f416f7d494914fe636e7b517952fa0ef090ed678192"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.759139 4678 generic.go:334] "Generic (PLEG): container finished" podID="3d0acb73-5437-44f1-a83e-2a3781acce52" containerID="1e15e4f9d962a3fe17133f416f7d494914fe636e7b517952fa0ef090ed678192" exitCode=0 Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.768441 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" event={"ID":"3d0acb73-5437-44f1-a83e-2a3781acce52","Type":"ContainerStarted","Data":"6729d8b030b52d43b00268a450a4f0c2cb715d74afa957ad519700c1cdb81363"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.777357 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6znf4\" (UniqueName: \"kubernetes.io/projected/81eb78f6-e7d7-4f21-b8a0-28c6f0275897-kube-api-access-6znf4\") pod \"packageserver-d55dfcdfc-g8vp2\" (UID: \"81eb78f6-e7d7-4f21-b8a0-28c6f0275897\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.785186 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.787315 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" event={"ID":"902681dd-c0f3-4fda-8d56-c3fff7e3fcec","Type":"ContainerStarted","Data":"2dadb9e6d5b956143c710d99df2d729982a0d70e342b6d2331dcf5e145bb1609"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.787365 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" event={"ID":"902681dd-c0f3-4fda-8d56-c3fff7e3fcec","Type":"ContainerStarted","Data":"d31c7e3d273f88ccc8c54fa05d30e5f85ed522547919f0a87707ddce1b19a280"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.791855 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.798281 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" event={"ID":"64cfe70c-3f37-4f26-b699-d8229dba4508","Type":"ContainerStarted","Data":"ac90aacb6d523a34cf7fe8bf82d6aa1e0511564aa1c28d8f3e31f3f225b25109"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.800791 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.804713 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdnxw\" (UniqueName: \"kubernetes.io/projected/0ddc381d-aa5f-48f3-af7d-71987e847670-kube-api-access-kdnxw\") pod \"service-ca-operator-777779d784-wsncx\" (UID: \"0ddc381d-aa5f-48f3-af7d-71987e847670\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.811806 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.812481 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" event={"ID":"57abb356-60a5-43ec-8ab0-07e2198a494d","Type":"ContainerStarted","Data":"973e0acdda148a2eba5df6125d80d75a6d9557b44cad59533c2e983895f0b4ba"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.812530 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" event={"ID":"57abb356-60a5-43ec-8ab0-07e2198a494d","Type":"ContainerStarted","Data":"e9c02b9367951fe992555daff4d37ffac9fd298ffaefe06b44a4648d56393e87"} Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.813309 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc8bj\" (UniqueName: \"kubernetes.io/projected/38fca0ce-fe47-4830-9403-4148d1195b66-kube-api-access-tc8bj\") pod \"machine-config-server-lc4nq\" (UID: \"38fca0ce-fe47-4830-9403-4148d1195b66\") " pod="openshift-machine-config-operator/machine-config-server-lc4nq" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.814307 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.825470 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.836449 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5cl\" (UniqueName: \"kubernetes.io/projected/ac428049-1481-4d93-acbc-d18a1b81b60c-kube-api-access-cx5cl\") pod \"migrator-59844c95c7-5t5cn\" (UID: \"ac428049-1481-4d93-acbc-d18a1b81b60c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.840044 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.841203 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.841872 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.341838959 +0000 UTC m=+144.272898598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.841980 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.853840 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" event={"ID":"974b621b-6635-4ca8-b53d-b15ae31b51b0","Type":"ContainerStarted","Data":"6a004e5e49e357d89c95e07fd80b69b5ad344d8162fb3fab17067166a0d1d257"}
Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.854493 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.354452338 +0000 UTC m=+144.285511977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.856818 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn"
Nov 24 11:18:52 crc kubenswrapper[4678]: W1124 11:18:52.861568 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d72dd0f_a43c_4ee8_8a71_656141506c59.slice/crio-f67081ba28ff0e9febd3f52bff68ff5c2249a1a6ef6a47e9113e286c35f620f3 WatchSource:0}: Error finding container f67081ba28ff0e9febd3f52bff68ff5c2249a1a6ef6a47e9113e286c35f620f3: Status 404 returned error can't find the container with id f67081ba28ff0e9febd3f52bff68ff5c2249a1a6ef6a47e9113e286c35f620f3
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.863028 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.900206 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" event={"ID":"2169100e-5122-411b-9cb1-4d1ae0ebbd86","Type":"ContainerStarted","Data":"5e83bdb0a6586aacf7666d0e6b5d29b35e9ac8736a3e80700aa1110a8b7f8197"}
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.900571 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" event={"ID":"2169100e-5122-411b-9cb1-4d1ae0ebbd86","Type":"ContainerStarted","Data":"f500d7bdf02f0db3d69fc9625b4c5f8195eeeb8fad994ae11800c75f3e08dabc"}
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.906349 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-chw9t" event={"ID":"38101ae8-9e21-4a62-b839-cc42e0562769","Type":"ContainerStarted","Data":"0e8e3fc47eb350b153d883c87c4ba354dbb2ff870269e5049d32cc6a4f857ee8"}
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.908709 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.919573 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.922234 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" event={"ID":"430a7abd-f5ce-4886-b79a-436d715e3e1b","Type":"ContainerStarted","Data":"f24c3fa244b68f7631d3ba8135ac050035b648c82f7bf8cce253ee0bcb1b0f7d"}
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.926012 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.944149 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:52 crc kubenswrapper[4678]: E1124 11:18:52.946457 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.446437974 +0000 UTC m=+144.377497613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.959220 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2"
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.960593 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" event={"ID":"019dfbed-3859-4761-890e-cd8205747454","Type":"ContainerStarted","Data":"0b473e10bcc98d7a8a8ada1a91fd204b7e763e0afeb35bb0d03adea7a1e9ec61"}
Nov 24 11:18:52 crc kubenswrapper[4678]: I1124 11:18:52.977790 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lc4nq"
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.022325 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" event={"ID":"daea8216-5097-43f5-913a-eda16abaf508","Type":"ContainerStarted","Data":"938c4aba87a4c7e300879af406b1fb35b49d1adb6b8b878d75def08dc4915421"}
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.057517 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.058517 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rkrb2"]
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.062064 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.56204546 +0000 UTC m=+144.493105099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.080786 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f8b8t"]
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.086459 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dr4nh"]
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.183090 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.183582 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.683561059 +0000 UTC m=+144.614620698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.275929 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" podStartSLOduration=122.275907155 podStartE2EDuration="2m2.275907155s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:53.275148224 +0000 UTC m=+144.206207863" watchObservedRunningTime="2025-11-24 11:18:53.275907155 +0000 UTC m=+144.206966794"
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.295656 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.296088 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.796073824 +0000 UTC m=+144.727133463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.316313 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" podStartSLOduration=122.316297285 podStartE2EDuration="2m2.316297285s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:53.315400729 +0000 UTC m=+144.246460368" watchObservedRunningTime="2025-11-24 11:18:53.316297285 +0000 UTC m=+144.247356924"
Nov 24 11:18:53 crc kubenswrapper[4678]: W1124 11:18:53.328128 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc54546d6_cb67_47c3_97dc_36d0433d6066.slice/crio-e0c020c20267f3152a8d6c5b5ada0737c06d7fd1db50cd96840111ae15ec7045 WatchSource:0}: Error finding container e0c020c20267f3152a8d6c5b5ada0737c06d7fd1db50cd96840111ae15ec7045: Status 404 returned error can't find the container with id e0c020c20267f3152a8d6c5b5ada0737c06d7fd1db50cd96840111ae15ec7045
Nov 24 11:18:53 crc kubenswrapper[4678]: W1124 11:18:53.328832 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73483387_af92_4cdd_872f_3bf8e62032b1.slice/crio-f11ca06aec73357827178f6d155894fd00fbd74a24564e4ce70298ed8485ac47 WatchSource:0}: Error finding container f11ca06aec73357827178f6d155894fd00fbd74a24564e4ce70298ed8485ac47: Status 404 returned error can't find the container with id f11ca06aec73357827178f6d155894fd00fbd74a24564e4ce70298ed8485ac47
Nov 24 11:18:53 crc kubenswrapper[4678]: W1124 11:18:53.350865 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7989598a_648e_4c88_aeed_1a54f14f8eab.slice/crio-fd1daa2d5120e491a1ac7d1afd4a80a4ad78821c9749180e8ff8675b162fc690 WatchSource:0}: Error finding container fd1daa2d5120e491a1ac7d1afd4a80a4ad78821c9749180e8ff8675b162fc690: Status 404 returned error can't find the container with id fd1daa2d5120e491a1ac7d1afd4a80a4ad78821c9749180e8ff8675b162fc690
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.397895 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.398416 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.898396453 +0000 UTC m=+144.829456092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.400348 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997"]
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.437569 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gb9k" podStartSLOduration=122.437532846 podStartE2EDuration="2m2.437532846s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:53.43082975 +0000 UTC m=+144.361889389" watchObservedRunningTime="2025-11-24 11:18:53.437532846 +0000 UTC m=+144.368592475"
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.448507 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7"]
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.459048 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ftpl8"]
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.467870 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdcv5"]
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.499510 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.499953 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:53.999939918 +0000 UTC m=+144.930999557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.555013 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h"]
Nov 24 11:18:53 crc kubenswrapper[4678]: W1124 11:18:53.576263 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cba64ea_58ae_4563_a1a5_0958891339e5.slice/crio-7563f75376d6dc5472e088b09493424e032d1103662218bd807fd9e888a22d35 WatchSource:0}: Error finding container 7563f75376d6dc5472e088b09493424e032d1103662218bd807fd9e888a22d35: Status 404 returned error can't find the container with id 7563f75376d6dc5472e088b09493424e032d1103662218bd807fd9e888a22d35
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.587975 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk"]
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.600546 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xpp8n"]
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.601701 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.602278 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.102258966 +0000 UTC m=+145.033318595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.704248 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.704781 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.204763399 +0000 UTC m=+145.135823038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.738839 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" podStartSLOduration=122.738816434 podStartE2EDuration="2m2.738816434s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:53.737357702 +0000 UTC m=+144.668417341" watchObservedRunningTime="2025-11-24 11:18:53.738816434 +0000 UTC m=+144.669876073"
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.805470 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.805882 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.305863742 +0000 UTC m=+145.236923381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:53 crc kubenswrapper[4678]: I1124 11:18:53.909389 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:53 crc kubenswrapper[4678]: E1124 11:18:53.910233 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.410219179 +0000 UTC m=+145.341278818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.012179 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.012442 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.512399183 +0000 UTC m=+145.443458822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.056501 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" event={"ID":"eec10a91-53ab-46cf-917c-5bbc191c0e68","Type":"ContainerStarted","Data":"32a32a11cc8214f35e563390083a1ce3172e0b69a031e8997803178ffff62731"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.065874 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" event={"ID":"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b","Type":"ContainerStarted","Data":"fe1a1e3da06157b9ec2f45ef28000cea8af335c05c22929784a9c507f3830139"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.102452 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" event={"ID":"430a7abd-f5ce-4886-b79a-436d715e3e1b","Type":"ContainerStarted","Data":"1bbb022d82aebd166f378fef2137f608a5ab3eb62d938996d7c537e497a3b732"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.107870 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lc4nq" event={"ID":"38fca0ce-fe47-4830-9403-4148d1195b66","Type":"ContainerStarted","Data":"6c670a263c57c20923020d054ed19efc5edff43924b85b2e701384742e024ed9"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.113625 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.114066 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.614051542 +0000 UTC m=+145.545111171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.126543 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" event={"ID":"4f36b66c-a595-4427-b08a-508b9bf5a27b","Type":"ContainerStarted","Data":"dab557c6ba53f42eea6f26e386009de4bd4ce01816c1ac01fbef64cea5b3e982"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.133359 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" event={"ID":"c488212d-33c5-4863-b35f-a7764a62ccfb","Type":"ContainerStarted","Data":"3894d4b9f4548d2e345d9a44aa5f46d0a676a5d32c96f3172e0d4bad0873bba5"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.140925 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-2qlj9" podStartSLOduration=123.140908107 podStartE2EDuration="2m3.140908107s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.139057743 +0000 UTC m=+145.070117382" watchObservedRunningTime="2025-11-24 11:18:54.140908107 +0000 UTC m=+145.071967746"
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.157998 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" event={"ID":"9216c066-ab74-4299-b586-92eba3e4d36a","Type":"ContainerStarted","Data":"da3c2a490ff4ad80c595002d4c6e1af4c1d353e2fd5bfd23430b60114b7ff88c"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.171798 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6v64g" podStartSLOduration=123.171781658 podStartE2EDuration="2m3.171781658s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.170620864 +0000 UTC m=+145.101680503" watchObservedRunningTime="2025-11-24 11:18:54.171781658 +0000 UTC m=+145.102841297"
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.187981 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zzwvq" event={"ID":"fef47a87-3f60-4ee1-a31e-b02583fc2819","Type":"ContainerStarted","Data":"f2dacf9652dc56c2026d0a3ecd4a36f43a9565ea384b01ea040e06d950cc952a"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.188411 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zzwvq"
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.189951 4678 patch_prober.go:28] interesting pod/downloads-7954f5f757-zzwvq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.189995 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zzwvq" podUID="fef47a87-3f60-4ee1-a31e-b02583fc2819" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.195017 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlttx" event={"ID":"16c36416-1b0e-493e-b349-3dbd7c007e29","Type":"ContainerStarted","Data":"df8253f3217f841d56eb248527c0c0b20db99e5b571e98d85a8b4a7d885f8d26"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.196804 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" event={"ID":"73483387-af92-4cdd-872f-3bf8e62032b1","Type":"ContainerStarted","Data":"f11ca06aec73357827178f6d155894fd00fbd74a24564e4ce70298ed8485ac47"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.198134 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" event={"ID":"8fa739d5-80cb-4afd-9ab9-850bf4a796d4","Type":"ContainerStarted","Data":"1558c4fae3d048951151644431c2fe52d361d077afdeba917b527875728f5e54"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.199688 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rkrb2" event={"ID":"c54546d6-cb67-47c3-97dc-36d0433d6066","Type":"ContainerStarted","Data":"e0c020c20267f3152a8d6c5b5ada0737c06d7fd1db50cd96840111ae15ec7045"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.205859 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" event={"ID":"daea8216-5097-43f5-913a-eda16abaf508","Type":"ContainerStarted","Data":"795be823b1b1551d8ba9b667b4101d5059f40c8d7daa8be3adc7ead041418d4f"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.207576 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" event={"ID":"64cfe70c-3f37-4f26-b699-d8229dba4508","Type":"ContainerStarted","Data":"1e78a05c6a11c7a2b32c071a5d3ce66545fae1267eecff498c30c04ab6c91cac"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.210944 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" event={"ID":"974b621b-6635-4ca8-b53d-b15ae31b51b0","Type":"ContainerStarted","Data":"7cb13d7a4bca6bdde24816603f2f57b7a3899337e198a7b0384b21aa2fd7a73f"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.219580 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.221047 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.721010096 +0000 UTC m=+145.652069745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.223393 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.225111 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.725090855 +0000 UTC m=+145.656150494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.236658 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" event={"ID":"4cba64ea-58ae-4563-a1a5-0958891339e5","Type":"ContainerStarted","Data":"7563f75376d6dc5472e088b09493424e032d1103662218bd807fd9e888a22d35"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.285913 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-chw9t" event={"ID":"38101ae8-9e21-4a62-b839-cc42e0562769","Type":"ContainerStarted","Data":"138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.312962 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xpp8n" event={"ID":"fb63fa7c-3843-434c-97f9-4563b81f1b0d","Type":"ContainerStarted","Data":"a580c5593b1b3afa1b39347d049b8774c4759ac0a9b40491ed1ecb822b0de82e"}
Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.327318 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.327421 4678 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.827402933 +0000 UTC m=+145.758462572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.327731 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.329302 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.829294227 +0000 UTC m=+145.760353866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.389398 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb7bk" podStartSLOduration=123.389372792 podStartE2EDuration="2m3.389372792s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.357326186 +0000 UTC m=+145.288385825" watchObservedRunningTime="2025-11-24 11:18:54.389372792 +0000 UTC m=+145.320432431" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.429100 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.446555 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:54.94648368 +0000 UTC m=+145.877543389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.455946 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" event={"ID":"019dfbed-3859-4761-890e-cd8205747454","Type":"ContainerStarted","Data":"430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686"} Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.457092 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.472295 4678 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tf9mj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body= Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.472414 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" podUID="019dfbed-3859-4761-890e-cd8205747454" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.534349 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.544307 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" event={"ID":"7989598a-648e-4c88-aeed-1a54f14f8eab","Type":"ContainerStarted","Data":"fd1daa2d5120e491a1ac7d1afd4a80a4ad78821c9749180e8ff8675b162fc690"} Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.547318 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.047290894 +0000 UTC m=+145.978350543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.550307 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" event={"ID":"7d72dd0f-a43c-4ee8-8a71-656141506c59","Type":"ContainerStarted","Data":"e0c26828e8380925c1bc563b1c716e0be37699026ccb1b4ecaca48a7c94707e3"} Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.550356 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" 
event={"ID":"7d72dd0f-a43c-4ee8-8a71-656141506c59","Type":"ContainerStarted","Data":"f67081ba28ff0e9febd3f52bff68ff5c2249a1a6ef6a47e9113e286c35f620f3"} Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.653248 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.654929 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.154912187 +0000 UTC m=+146.085971826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.737995 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" podStartSLOduration=123.737978323 podStartE2EDuration="2m3.737978323s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.737321094 +0000 UTC m=+145.668380743" watchObservedRunningTime="2025-11-24 11:18:54.737978323 +0000 UTC m=+145.669037962" Nov 
24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.757002 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.757365 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.257352588 +0000 UTC m=+146.188412227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.767715 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q2r4x"] Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.810085 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" podStartSLOduration=123.810054088 podStartE2EDuration="2m3.810054088s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.807689089 +0000 UTC m=+145.738748718" watchObservedRunningTime="2025-11-24 11:18:54.810054088 +0000 UTC 
m=+145.741113727" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.850628 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b2wdn" podStartSLOduration=123.850607532 podStartE2EDuration="2m3.850607532s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.845012289 +0000 UTC m=+145.776071958" watchObservedRunningTime="2025-11-24 11:18:54.850607532 +0000 UTC m=+145.781667171" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.854596 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wsncx"] Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.860473 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.861095 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.361065167 +0000 UTC m=+146.292124806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.889203 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" podStartSLOduration=123.889179518 podStartE2EDuration="2m3.889179518s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.885837251 +0000 UTC m=+145.816896890" watchObservedRunningTime="2025-11-24 11:18:54.889179518 +0000 UTC m=+145.820239157" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.890894 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp"] Nov 24 11:18:54 crc kubenswrapper[4678]: W1124 11:18:54.906824 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4da2091c_0d0d_47cd_9aa9_f6fc3a803b8d.slice/crio-bef30fe76b2a88ceb63f20dda7203c6c1f4aa5cd9307e6f508c9f5f3ddcc0c0d WatchSource:0}: Error finding container bef30fe76b2a88ceb63f20dda7203c6c1f4aa5cd9307e6f508c9f5f3ddcc0c0d: Status 404 returned error can't find the container with id bef30fe76b2a88ceb63f20dda7203c6c1f4aa5cd9307e6f508c9f5f3ddcc0c0d Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.936480 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zzwvq" 
podStartSLOduration=123.93614123 podStartE2EDuration="2m3.93614123s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.932688089 +0000 UTC m=+145.863747728" watchObservedRunningTime="2025-11-24 11:18:54.93614123 +0000 UTC m=+145.867200869" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.964041 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-chw9t" podStartSLOduration=123.964018494 podStartE2EDuration="2m3.964018494s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:54.963615192 +0000 UTC m=+145.894674831" watchObservedRunningTime="2025-11-24 11:18:54.964018494 +0000 UTC m=+145.895078153" Nov 24 11:18:54 crc kubenswrapper[4678]: I1124 11:18:54.964793 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:54 crc kubenswrapper[4678]: E1124 11:18:54.965352 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.465331343 +0000 UTC m=+146.396390992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.066475 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.066623 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.566587239 +0000 UTC m=+146.497646878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.067178 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.067601 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.567593459 +0000 UTC m=+146.498653098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.091051 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.092214 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.106508 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g4p7d" podStartSLOduration=124.106480034 podStartE2EDuration="2m4.106480034s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:55.07995257 +0000 UTC m=+146.011012209" watchObservedRunningTime="2025-11-24 11:18:55.106480034 +0000 UTC m=+146.037539673" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.111589 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6b4xb"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.124317 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.156849 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:55 crc kubenswrapper[4678]: W1124 11:18:55.159792 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac428049_1481_4d93_acbc_d18a1b81b60c.slice/crio-854e28ca50ad20083cf59ec0c81c6d5cf5e3a5e0773cfccc0c0124c3204ff6ae WatchSource:0}: Error finding container 854e28ca50ad20083cf59ec0c81c6d5cf5e3a5e0773cfccc0c0124c3204ff6ae: Status 404 returned error can't find the container with id 854e28ca50ad20083cf59ec0c81c6d5cf5e3a5e0773cfccc0c0124c3204ff6ae Nov 24 11:18:55 crc kubenswrapper[4678]: W1124 11:18:55.166435 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe9c5230_fd8a_4d57_8cd7_2be1987a9aad.slice/crio-410a5bcbd72d1736c8f860f73f12fba70c25e721bdaa165ef72039a0d69d7ef2 WatchSource:0}: Error finding container 410a5bcbd72d1736c8f860f73f12fba70c25e721bdaa165ef72039a0d69d7ef2: Status 404 returned error can't find the container with id 410a5bcbd72d1736c8f860f73f12fba70c25e721bdaa165ef72039a0d69d7ef2 Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.180475 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.180639 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.680611789 +0000 UTC m=+146.611671428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.181110 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.183462 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.683447502 +0000 UTC m=+146.614507141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.231445 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.285586 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.286147 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.78611474 +0000 UTC m=+146.717174379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: W1124 11:18:55.294272 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2a930cd_9bf0_43d4_8a97_3bb2c0c7f6ea.slice/crio-ba103b9a80248a5bd00a604e7c1f68ba727b8078fa557345d2393adef1239f77 WatchSource:0}: Error finding container ba103b9a80248a5bd00a604e7c1f68ba727b8078fa557345d2393adef1239f77: Status 404 returned error can't find the container with id ba103b9a80248a5bd00a604e7c1f68ba727b8078fa557345d2393adef1239f77 Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.311471 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.388309 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.388853 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 11:18:55.88883323 +0000 UTC m=+146.819892859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.391179 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.438199 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4wkf5"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.498553 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.499057 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:55.999034328 +0000 UTC m=+146.930093967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.517886 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.518374 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.553755 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.555301 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.562742 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj"] Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.586087 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" event={"ID":"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b","Type":"ContainerStarted","Data":"7f0543476d371c0e0cc91fe8a57cda49d205661a390c3546503957abd47b7b26"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.589417 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 
11:18:55.590145 4678 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bdcv5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.590260 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" podUID="83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.599977 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.600796 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.10077902 +0000 UTC m=+147.031838659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.653855 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" event={"ID":"7d72dd0f-a43c-4ee8-8a71-656141506c59","Type":"ContainerStarted","Data":"ee060ecc1584515914c30d5eab60225895346ac1196e3cd9883c44a6c4353d67"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.690569 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" event={"ID":"6e0a20b0-a531-4f04-9cdd-d62131c816ed","Type":"ContainerStarted","Data":"4007c9bf77cb6f52c43b6e07526a02199f76b6b6b186629743afef3ef738c589"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.692867 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" podStartSLOduration=124.692841528 podStartE2EDuration="2m4.692841528s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:55.691432357 +0000 UTC m=+146.622491996" watchObservedRunningTime="2025-11-24 11:18:55.692841528 +0000 UTC m=+146.623901167" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.711123 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.713066 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.213034658 +0000 UTC m=+147.144094297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.719508 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rkrb2" event={"ID":"c54546d6-cb67-47c3-97dc-36d0433d6066","Type":"ContainerStarted","Data":"b2232c1b0ff1f7d6a74fa2d8248b39860579a1d2e8829ae58957ba32c7ec1e73"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.722129 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.722338 4678 patch_prober.go:28] interesting pod/console-operator-58897d9998-rkrb2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.722481 4678 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rkrb2" podUID="c54546d6-cb67-47c3-97dc-36d0433d6066" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.735863 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" event={"ID":"03e0e923-647d-4e57-975a-d4d3e2c22cb5","Type":"ContainerStarted","Data":"7f07cbf09f0c8dfd9aaa9ccc58f73a00643677c5cc7836fb01485c543e785d3b"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.764750 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" event={"ID":"81eb78f6-e7d7-4f21-b8a0-28c6f0275897","Type":"ContainerStarted","Data":"65aa43ad8c8dd753e26d6a991f7fe63411ead5de9d1bef2c738ff3c512bcbae2"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.782126 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" event={"ID":"c488212d-33c5-4863-b35f-a7764a62ccfb","Type":"ContainerStarted","Data":"1202f006c8eb1830f916a682d09fb6d026cce465303376ab58d74f1f3d3e9c31"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.798077 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn" event={"ID":"ac428049-1481-4d93-acbc-d18a1b81b60c","Type":"ContainerStarted","Data":"854e28ca50ad20083cf59ec0c81c6d5cf5e3a5e0773cfccc0c0124c3204ff6ae"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.815821 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.817391 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.317371494 +0000 UTC m=+147.248431133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.845168 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" event={"ID":"8fa739d5-80cb-4afd-9ab9-850bf4a796d4","Type":"ContainerStarted","Data":"637bdba5c92e4a79094a06f1d0f827b766ddad2add9b40af71202bdf17709500"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.845334 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" event={"ID":"8fa739d5-80cb-4afd-9ab9-850bf4a796d4","Type":"ContainerStarted","Data":"b4a7a67c808ed9048f4a120e1ff32012f6e613fd267181edfa2c7489d3bf6dff"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.876127 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" 
event={"ID":"be9c5230-fd8a-4d57-8cd7-2be1987a9aad","Type":"ContainerStarted","Data":"410a5bcbd72d1736c8f860f73f12fba70c25e721bdaa165ef72039a0d69d7ef2"} Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.902319 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9r9fl" podStartSLOduration=124.902292685 podStartE2EDuration="2m4.902292685s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:55.814330446 +0000 UTC m=+146.745390085" watchObservedRunningTime="2025-11-24 11:18:55.902292685 +0000 UTC m=+146.833352324" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.918427 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:55 crc kubenswrapper[4678]: E1124 11:18:55.919954 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.419926139 +0000 UTC m=+147.350985778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.939103 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5wl7" podStartSLOduration=124.93908256899999 podStartE2EDuration="2m4.939082569s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:55.938315557 +0000 UTC m=+146.869375196" watchObservedRunningTime="2025-11-24 11:18:55.939082569 +0000 UTC m=+146.870142208" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.939429 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rkrb2" podStartSLOduration=124.93942409900001 podStartE2EDuration="2m4.939424099s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:55.901942505 +0000 UTC m=+146.833002144" watchObservedRunningTime="2025-11-24 11:18:55.939424099 +0000 UTC m=+146.870483738" Nov 24 11:18:55 crc kubenswrapper[4678]: I1124 11:18:55.966040 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lc4nq" event={"ID":"38fca0ce-fe47-4830-9403-4148d1195b66","Type":"ContainerStarted","Data":"9e9084ca35c1e6f15ad7c23806f763dbecc52d54e1bbdbbafa63e042402f9251"} 
Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.003014 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" podStartSLOduration=125.002987745 podStartE2EDuration="2m5.002987745s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.000357759 +0000 UTC m=+146.931417408" watchObservedRunningTime="2025-11-24 11:18:56.002987745 +0000 UTC m=+146.934047384" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.016018 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" event={"ID":"0ddc381d-aa5f-48f3-af7d-71987e847670","Type":"ContainerStarted","Data":"e5a7c48baa4979f16d66b46695262df31b1bb5ee73a9d3873fb01e78648eddd8"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.029034 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.030529 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.530513379 +0000 UTC m=+147.461573018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.090042 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlttx" event={"ID":"16c36416-1b0e-493e-b349-3dbd7c007e29","Type":"ContainerStarted","Data":"243e5432ade027610dc0e2c25dc064994cee798f2f933873ed5606ae44d176ad"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.111402 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xpp8n" event={"ID":"fb63fa7c-3843-434c-97f9-4563b81f1b0d","Type":"ContainerStarted","Data":"e2f04474c9491cf8239b35d4cc7182b821648bf77898394deb9ede524131eca5"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.135832 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" podStartSLOduration=125.135806344 podStartE2EDuration="2m5.135806344s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.133345262 +0000 UTC m=+147.064404901" watchObservedRunningTime="2025-11-24 11:18:56.135806344 +0000 UTC m=+147.066865983" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.136213 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lc4nq" podStartSLOduration=7.136205076 podStartE2EDuration="7.136205076s" podCreationTimestamp="2025-11-24 
11:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.060011141 +0000 UTC m=+146.991070790" watchObservedRunningTime="2025-11-24 11:18:56.136205076 +0000 UTC m=+147.067264715" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.141218 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.143554 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.643523999 +0000 UTC m=+147.574583638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.145951 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" event={"ID":"3d0acb73-5437-44f1-a83e-2a3781acce52","Type":"ContainerStarted","Data":"009e5b9efbdc184aee65ecd97902806745310d0a0d48251ff660a54a996e8013"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.154430 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.207054 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" event={"ID":"974b621b-6635-4ca8-b53d-b15ae31b51b0","Type":"ContainerStarted","Data":"7d68eb4af10dbac6ba7b84ea80329c5dfb2da7813d5f05d233efc2abe830c1ae"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.243395 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.274518 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qlttx" 
podStartSLOduration=125.274490414 podStartE2EDuration="2m5.274490414s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.219833848 +0000 UTC m=+147.150893487" watchObservedRunningTime="2025-11-24 11:18:56.274490414 +0000 UTC m=+147.205550043" Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.277425 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.777394769 +0000 UTC m=+147.708454408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.324660 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" event={"ID":"7989598a-648e-4c88-aeed-1a54f14f8eab","Type":"ContainerStarted","Data":"a71223b9deef96b174c6c26c566ce051c15f09dcb9dfc234166b6d4e05ddf98a"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.341417 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" podStartSLOduration=125.341385367 podStartE2EDuration="2m5.341385367s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 11:18:56.273836585 +0000 UTC m=+147.204896224" watchObservedRunningTime="2025-11-24 11:18:56.341385367 +0000 UTC m=+147.272445006" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.345562 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.345640 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.845621931 +0000 UTC m=+147.776681570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.364841 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.365240 4678 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.865226073 +0000 UTC m=+147.796285712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.362257 4678 patch_prober.go:28] interesting pod/apiserver-76f77b778f-kl8pj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]log ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]etcd ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/max-in-flight-filter ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 24 11:18:56 crc kubenswrapper[4678]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 24 11:18:56 crc kubenswrapper[4678]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/project.openshift.io-projectcache ok Nov 24 11:18:56 crc kubenswrapper[4678]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/openshift.io-startinformers ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 24 11:18:56 crc kubenswrapper[4678]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 11:18:56 crc kubenswrapper[4678]: livez check failed Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.365626 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" podUID="430a7abd-f5ce-4886-b79a-436d715e3e1b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.377523 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" podStartSLOduration=125.377502632 podStartE2EDuration="2m5.377502632s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.324466743 +0000 UTC m=+147.255526382" watchObservedRunningTime="2025-11-24 11:18:56.377502632 +0000 UTC m=+147.308562271" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.397476 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dr4nh" podStartSLOduration=125.397450685 podStartE2EDuration="2m5.397450685s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.378289465 +0000 UTC m=+147.309349104" watchObservedRunningTime="2025-11-24 11:18:56.397450685 +0000 UTC m=+147.328510324" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.458920 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" event={"ID":"73483387-af92-4cdd-872f-3bf8e62032b1","Type":"ContainerStarted","Data":"d5cbfc74ed9bc21538c82aa718d7f9e07d004992f2f4713bb080661dba0088b4"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.466377 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.466767 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.966748359 +0000 UTC m=+147.897807998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.466862 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.468653 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:56.968645464 +0000 UTC m=+147.899705103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.493938 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" event={"ID":"80ecc549-e277-418f-bf45-873acf3b8794","Type":"ContainerStarted","Data":"831892bcd1907b0717e244f6591b96c6429c3fc28497ae9c00642b6f17d06ad2"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.565027 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" event={"ID":"eec10a91-53ab-46cf-917c-5bbc191c0e68","Type":"ContainerStarted","Data":"66c6163e8fd58c1d14e7478585b2c9ceb872dc0439b637d0364bfa19d92b3c78"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.569106 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.574577 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.074554347 +0000 UTC m=+148.005613986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.599227 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" event={"ID":"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea","Type":"ContainerStarted","Data":"ba103b9a80248a5bd00a604e7c1f68ba727b8078fa557345d2393adef1239f77"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.617267 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8r6h" podStartSLOduration=125.617245823 podStartE2EDuration="2m5.617245823s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.615854642 +0000 UTC m=+147.546914281" watchObservedRunningTime="2025-11-24 11:18:56.617245823 +0000 UTC m=+147.548305462" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.618802 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" podStartSLOduration=125.618660704 podStartE2EDuration="2m5.618660704s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.494373556 +0000 UTC m=+147.425433195" watchObservedRunningTime="2025-11-24 11:18:56.618660704 +0000 UTC m=+147.549720343" Nov 24 11:18:56 
crc kubenswrapper[4678]: I1124 11:18:56.627783 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q2r4x" event={"ID":"4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d","Type":"ContainerStarted","Data":"bef30fe76b2a88ceb63f20dda7203c6c1f4aa5cd9307e6f508c9f5f3ddcc0c0d"} Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.631661 4678 patch_prober.go:28] interesting pod/downloads-7954f5f757-zzwvq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.631733 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zzwvq" podUID="fef47a87-3f60-4ee1-a31e-b02583fc2819" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.638566 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.642935 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hw6d8" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.674403 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.675744 4678 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.175728721 +0000 UTC m=+148.106788360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.689180 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q2r4x" podStartSLOduration=7.689152834 podStartE2EDuration="7.689152834s" podCreationTimestamp="2025-11-24 11:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:56.687275309 +0000 UTC m=+147.618334948" watchObservedRunningTime="2025-11-24 11:18:56.689152834 +0000 UTC m=+147.620212473" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.775840 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.778087 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 11:18:57.278048749 +0000 UTC m=+148.209108568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.794131 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.815010 4678 patch_prober.go:28] interesting pod/router-default-5444994796-qlttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:18:56 crc kubenswrapper[4678]: [-]has-synced failed: reason withheld Nov 24 11:18:56 crc kubenswrapper[4678]: [+]process-running ok Nov 24 11:18:56 crc kubenswrapper[4678]: healthz check failed Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.815086 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlttx" podUID="16c36416-1b0e-493e-b349-3dbd7c007e29" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.880088 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.880584 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.380567494 +0000 UTC m=+148.311627123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:56 crc kubenswrapper[4678]: I1124 11:18:56.982718 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:56 crc kubenswrapper[4678]: E1124 11:18:56.983261 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.483235942 +0000 UTC m=+148.414295581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.086064 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.086919 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.586698083 +0000 UTC m=+148.517757722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.187025 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.187490 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.687473786 +0000 UTC m=+148.618533425 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.288964 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.289532 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.789503146 +0000 UTC m=+148.720562985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.390754 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.391195 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.891141744 +0000 UTC m=+148.822201383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.492863 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.492936 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.492957 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.493052 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.493107 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.493588 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:57.993568995 +0000 UTC m=+148.924628634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.497061 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.500747 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.500816 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.509541 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.594483 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.594812 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.094774321 +0000 UTC m=+149.025833960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.594891 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.595330 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.095309736 +0000 UTC m=+149.026369375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.615169 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.621448 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.667081 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" event={"ID":"81eb78f6-e7d7-4f21-b8a0-28c6f0275897","Type":"ContainerStarted","Data":"0af25232a9eb189a524ef2165ebc71f9529e5d0aed5028bebb42456b32c2e594"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.669078 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.671001 4678 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g8vp2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.671079 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" podUID="81eb78f6-e7d7-4f21-b8a0-28c6f0275897" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.697326 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.697704 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.197681086 +0000 UTC m=+149.128740725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.715782 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.731523 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xpp8n" event={"ID":"fb63fa7c-3843-434c-97f9-4563b81f1b0d","Type":"ContainerStarted","Data":"7ece03ad85731719a7ee61f3a6ec44cf9baf6657d8082b5f1cf7e3c30200d5d5"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.731819 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xpp8n" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.761200 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" event={"ID":"45a91a43-cc29-4d11-b78b-27f24c8f89a1","Type":"ContainerStarted","Data":"5b43af70b1bb9789e3f08ac92b46cfd94c30f3d37d54c6f8caf7f82dff87d9eb"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.761279 4678 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" event={"ID":"45a91a43-cc29-4d11-b78b-27f24c8f89a1","Type":"ContainerStarted","Data":"568d524c5884bd1f5bf3d20e6ca7f2269f5172781e9f346e05fd4f287ad671dc"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.777937 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xpp8n" podStartSLOduration=8.777906518 podStartE2EDuration="8.777906518s" podCreationTimestamp="2025-11-24 11:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:57.776306392 +0000 UTC m=+148.707366041" watchObservedRunningTime="2025-11-24 11:18:57.777906518 +0000 UTC m=+148.708966157" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.779133 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" podStartSLOduration=126.779120364 podStartE2EDuration="2m6.779120364s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:57.724616972 +0000 UTC m=+148.655676611" watchObservedRunningTime="2025-11-24 11:18:57.779120364 +0000 UTC m=+148.710180003" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.791843 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f8b8t" event={"ID":"73483387-af92-4cdd-872f-3bf8e62032b1","Type":"ContainerStarted","Data":"55ba78ebc8c9ae36a15115cade3c9984de655ab9b97fcc61270bed1f81cde056"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.794390 4678 patch_prober.go:28] interesting pod/router-default-5444994796-qlttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Nov 24 11:18:57 crc kubenswrapper[4678]: [-]has-synced failed: reason withheld Nov 24 11:18:57 crc kubenswrapper[4678]: [+]process-running ok Nov 24 11:18:57 crc kubenswrapper[4678]: healthz check failed Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.794633 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlttx" podUID="16c36416-1b0e-493e-b349-3dbd7c007e29" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.799218 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.799536 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.29952457 +0000 UTC m=+149.230584209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.807200 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" event={"ID":"c488212d-33c5-4863-b35f-a7764a62ccfb","Type":"ContainerStarted","Data":"28fe0c42168f6507ebd3f8352a40a174d126efea4f0c64d71b11ae01f4419d57"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.807844 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.817195 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fwc2" podStartSLOduration=126.817172005 podStartE2EDuration="2m6.817172005s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:57.814992532 +0000 UTC m=+148.746052171" watchObservedRunningTime="2025-11-24 11:18:57.817172005 +0000 UTC m=+148.748231644" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.821046 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xz9nm" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.858768 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" event={"ID":"bc59508e-6c7e-4810-97db-e651e8f021ba","Type":"ContainerStarted","Data":"a8be02c89d54c2033006dd95476afdf74d86bc344a537298461a7eb7e8fe00fe"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.858819 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" event={"ID":"bc59508e-6c7e-4810-97db-e651e8f021ba","Type":"ContainerStarted","Data":"f08769602986bc284733027298c59f4563e827b4e6c494e346368d2eefa87020"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.859704 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.862518 4678 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-dfcjk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.862577 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" podUID="bc59508e-6c7e-4810-97db-e651e8f021ba" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.864529 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk" podStartSLOduration=126.864498507 podStartE2EDuration="2m6.864498507s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 11:18:57.859988316 +0000 UTC m=+148.791047945" watchObservedRunningTime="2025-11-24 11:18:57.864498507 +0000 UTC m=+148.795558146" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.877852 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn" event={"ID":"ac428049-1481-4d93-acbc-d18a1b81b60c","Type":"ContainerStarted","Data":"c935398d7136514417705b72deb70a09380772c76fb30e627a5643ebd8974bff"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.877969 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn" event={"ID":"ac428049-1481-4d93-acbc-d18a1b81b60c","Type":"ContainerStarted","Data":"e2f0f3400356dec13b06d34f862e07ff857e043fdac8f06e90cf4840efd9c610"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.885996 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" podStartSLOduration=126.885979845 podStartE2EDuration="2m6.885979845s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:57.885402228 +0000 UTC m=+148.816461867" watchObservedRunningTime="2025-11-24 11:18:57.885979845 +0000 UTC m=+148.817039484" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.903859 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:57 crc kubenswrapper[4678]: E1124 11:18:57.905809 4678 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.405787944 +0000 UTC m=+149.336847583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.908831 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" event={"ID":"c2a930cd-9bf0-43d4-8a97-3bb2c0c7f6ea","Type":"ContainerStarted","Data":"f00ad2fcbc2c1934d3acd0077067d94299cdd27e7b7ad831ae7aa2f36f451ff6"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.950705 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" event={"ID":"03e0e923-647d-4e57-975a-d4d3e2c22cb5","Type":"ContainerStarted","Data":"ad5daf9be6fdf447e6b70dc94e28cd1ed3ee8cbbae10b78dea73c6ece2525bf7"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.950755 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" event={"ID":"03e0e923-647d-4e57-975a-d4d3e2c22cb5","Type":"ContainerStarted","Data":"27a2d1d08a697ca6a64bbb9f2b84d31db76f6eb7b4810d1558c833128432d391"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.957481 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6b4xb" podStartSLOduration=126.957462803 
podStartE2EDuration="2m6.957462803s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:57.956274228 +0000 UTC m=+148.887333867" watchObservedRunningTime="2025-11-24 11:18:57.957462803 +0000 UTC m=+148.888522442" Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.977973 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8d997" event={"ID":"4cba64ea-58ae-4563-a1a5-0958891339e5","Type":"ContainerStarted","Data":"080c1c643bd02859f5ee02e7b0221910f58bc866700ba536027bcf6c2de298be"} Nov 24 11:18:57 crc kubenswrapper[4678]: I1124 11:18:57.995854 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5t5cn" podStartSLOduration=126.995829613 podStartE2EDuration="2m6.995829613s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:57.989724425 +0000 UTC m=+148.920784054" watchObservedRunningTime="2025-11-24 11:18:57.995829613 +0000 UTC m=+148.926889252" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.002373 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" event={"ID":"be9c5230-fd8a-4d57-8cd7-2be1987a9aad","Type":"ContainerStarted","Data":"951e5067dc8ef993615e6f673b37bff42d6b3bf694c513546da9edab1133008e"} Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.006930 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.008725 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.508703949 +0000 UTC m=+149.439763588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.017017 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" event={"ID":"6e0a20b0-a531-4f04-9cdd-d62131c816ed","Type":"ContainerStarted","Data":"4c6228ce4399c2616106cc91bf01195ea911574bb9ebaa284dfe5d3cb4ea8bb6"} Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.032871 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsncx" event={"ID":"0ddc381d-aa5f-48f3-af7d-71987e847670","Type":"ContainerStarted","Data":"e3414bd20cd233b45f5278ea61233dde59d169adf97affcc355bf95926b8627d"} Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.036437 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9k4bm" podStartSLOduration=127.036411418 podStartE2EDuration="2m7.036411418s" podCreationTimestamp="2025-11-24 11:16:51 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:58.034146081 +0000 UTC m=+148.965205720" watchObservedRunningTime="2025-11-24 11:18:58.036411418 +0000 UTC m=+148.967471057" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.060086 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" event={"ID":"80ecc549-e277-418f-bf45-873acf3b8794","Type":"ContainerStarted","Data":"e542fa3287fb57b751a341f82536960cce95ff61de5207ee9e36c7f6746352c3"} Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.074443 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-84jsm" podStartSLOduration=127.074410257 podStartE2EDuration="2m7.074410257s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:58.073332666 +0000 UTC m=+149.004392315" watchObservedRunningTime="2025-11-24 11:18:58.074410257 +0000 UTC m=+149.005469896" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.089775 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" event={"ID":"4f36b66c-a595-4427-b08a-508b9bf5a27b","Type":"ContainerStarted","Data":"4e844529627cf1d2215b302ab004c6ad63d60ec77ec4dc266f4e0ff3df9e209c"} Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.108846 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.110404 
4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.610358138 +0000 UTC m=+149.541417777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.110883 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" event={"ID":"39caea0d-552b-4862-a9fd-0c82865ba675","Type":"ContainerStarted","Data":"c300406616f1f1f9ca0c47eb1f92e3b3d0204c56a6689a58d1db2395d9f4fbbd"} Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.110937 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" event={"ID":"39caea0d-552b-4862-a9fd-0c82865ba675","Type":"ContainerStarted","Data":"5c4e752b3d237e7e4982033c47bea97e45692c4d9ceb3c5819bf2e7b84ced0a0"} Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.111824 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.122327 4678 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-76dnj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 
10.217.0.37:8443: connect: connection refused" start-of-body= Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.122394 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" podUID="39caea0d-552b-4862-a9fd-0c82865ba675" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.123605 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjcrp" podStartSLOduration=127.123581993 podStartE2EDuration="2m7.123581993s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:58.122131251 +0000 UTC m=+149.053190890" watchObservedRunningTime="2025-11-24 11:18:58.123581993 +0000 UTC m=+149.054641632" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.153294 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q2r4x" event={"ID":"4da2091c-0d0d-47cd-9aa9-f6fc3a803b8d","Type":"ContainerStarted","Data":"ed16877dc629fa949dc4c61f456699a354e534bf04491465987d69ba0d78532b"} Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.159138 4678 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bdcv5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.159183 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" 
podUID="83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.202427 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" podStartSLOduration=127.202403026 podStartE2EDuration="2m7.202403026s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:58.170586846 +0000 UTC m=+149.101646485" watchObservedRunningTime="2025-11-24 11:18:58.202403026 +0000 UTC m=+149.133462665" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.213239 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.215019 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.715006014 +0000 UTC m=+149.646065643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.309682 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rkrb2" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.317293 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.318275 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.818253568 +0000 UTC m=+149.749313197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.343578 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" podStartSLOduration=127.343552467 podStartE2EDuration="2m7.343552467s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:18:58.217441684 +0000 UTC m=+149.148501313" watchObservedRunningTime="2025-11-24 11:18:58.343552467 +0000 UTC m=+149.274612106" Nov 24 11:18:58 crc kubenswrapper[4678]: W1124 11:18:58.406498 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ca15b624f95ac261963c434beeb5cb43e7a18434735852458ce8e34d9ac724a2 WatchSource:0}: Error finding container ca15b624f95ac261963c434beeb5cb43e7a18434735852458ce8e34d9ac724a2: Status 404 returned error can't find the container with id ca15b624f95ac261963c434beeb5cb43e7a18434735852458ce8e34d9ac724a2 Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.419969 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.420417 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:58.920400721 +0000 UTC m=+149.851460360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.523112 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.523695 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.023646067 +0000 UTC m=+149.954705716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: W1124 11:18:58.623144 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-d1353013d952915c8d9b2c32e3e9328432207f9915953b0ca4294fba2470142a WatchSource:0}: Error finding container d1353013d952915c8d9b2c32e3e9328432207f9915953b0ca4294fba2470142a: Status 404 returned error can't find the container with id d1353013d952915c8d9b2c32e3e9328432207f9915953b0ca4294fba2470142a Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.624872 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.625434 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.125415539 +0000 UTC m=+150.056475178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.728587 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.728771 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.228734617 +0000 UTC m=+150.159794256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.728927 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.729392 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.229377865 +0000 UTC m=+150.160437504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.794026 4678 patch_prober.go:28] interesting pod/router-default-5444994796-qlttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:18:58 crc kubenswrapper[4678]: [-]has-synced failed: reason withheld Nov 24 11:18:58 crc kubenswrapper[4678]: [+]process-running ok Nov 24 11:18:58 crc kubenswrapper[4678]: healthz check failed Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.795587 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlttx" podUID="16c36416-1b0e-493e-b349-3dbd7c007e29" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.829920 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.830141 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 11:18:59.330102506 +0000 UTC m=+150.261162145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.830264 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.830578 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.33056193 +0000 UTC m=+150.261621569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.931475 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.931720 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.431685502 +0000 UTC m=+150.362745161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:58 crc kubenswrapper[4678]: I1124 11:18:58.932400 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:58 crc kubenswrapper[4678]: E1124 11:18:58.932782 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.432774345 +0000 UTC m=+150.363833984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.033532 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.033774 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.533726143 +0000 UTC m=+150.464785782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.033976 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.034342 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.534325851 +0000 UTC m=+150.465385490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.135125 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.135487 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.635463204 +0000 UTC m=+150.566522843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.161313 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"339c6d901dec1fff134b18fd89fe4fc9fb5afc540bbd645b54925205efa48c50"} Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.161363 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ca15b624f95ac261963c434beeb5cb43e7a18434735852458ce8e34d9ac724a2"} Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.164384 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5ea5e5be378029f59e830e9bc60c3e61c37d26c09eb3e2cb8ebef2516361d245"} Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.164447 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"436fdb3d6400c68096bdd7f22861b374278a10e793f7e8f676ddafac2d8ce870"} Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.164693 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" 
Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.167400 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4wkf5" event={"ID":"80ecc549-e277-418f-bf45-873acf3b8794","Type":"ContainerStarted","Data":"5e356bb7e79b3244a830001240a498f5a4a9dcbfc3e5d297aa54dabcf688d12a"} Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.169942 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"54c18ea0104dc27b54ae34e05316395e8f1b91b3a98a6b4c5b78a3ba0d872ffa"} Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.170010 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d1353013d952915c8d9b2c32e3e9328432207f9915953b0ca4294fba2470142a"} Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.172744 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" event={"ID":"4f36b66c-a595-4427-b08a-508b9bf5a27b","Type":"ContainerStarted","Data":"31ddc188f0d29719dcffdd0fe1c8ad51be17201da066e7d6aa28229e6ffbbb6a"} Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.179746 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.184276 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-76dnj" Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.209948 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dfcjk" Nov 24 11:18:59 crc 
kubenswrapper[4678]: I1124 11:18:59.236527 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.241987 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.741963714 +0000 UTC m=+150.673023353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.338464 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.339167 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 11:18:59.839143792 +0000 UTC m=+150.770203421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.443316 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.443983 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:18:59.943961573 +0000 UTC m=+150.875021212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.544566 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.544890 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.04487176 +0000 UTC m=+150.975931399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.608997 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g8vp2" Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.646399 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.646924 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.146901809 +0000 UTC m=+151.077961448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.748449 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.748638 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.24861049 +0000 UTC m=+151.179670129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.748828 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.749211 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.249199687 +0000 UTC m=+151.180259326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.788593 4678 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.794823 4678 patch_prober.go:28] interesting pod/router-default-5444994796-qlttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:18:59 crc kubenswrapper[4678]: [-]has-synced failed: reason withheld Nov 24 11:18:59 crc kubenswrapper[4678]: [+]process-running ok Nov 24 11:18:59 crc kubenswrapper[4678]: healthz check failed Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.794920 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlttx" podUID="16c36416-1b0e-493e-b349-3dbd7c007e29" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.850459 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.850767 4678 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.350725302 +0000 UTC m=+151.281784941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.850838 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.851347 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.351325309 +0000 UTC m=+151.282384948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.952476 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.952862 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.452814794 +0000 UTC m=+151.383874433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:18:59 crc kubenswrapper[4678]: I1124 11:18:59.952922 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:18:59 crc kubenswrapper[4678]: E1124 11:18:59.953318 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.453308728 +0000 UTC m=+151.384368357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.053591 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:19:00 crc kubenswrapper[4678]: E1124 11:19:00.053758 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.55372302 +0000 UTC m=+151.484782659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.054023 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:19:00 crc kubenswrapper[4678]: E1124 11:19:00.054413 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 11:19:00.55439699 +0000 UTC m=+151.485456629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcwcn" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.064232 4678 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T11:18:59.788630239Z","Handler":null,"Name":""} Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.073270 4678 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.073300 4678 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.155584 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.161910 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.183387 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" event={"ID":"4f36b66c-a595-4427-b08a-508b9bf5a27b","Type":"ContainerStarted","Data":"8269145f58d6c32bb30e0035553d3a6eaf295a900ee48ef41d4eeb3a7f5d007e"} Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.183457 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" event={"ID":"4f36b66c-a595-4427-b08a-508b9bf5a27b","Type":"ContainerStarted","Data":"dc1dac3484b5134b3142c05510559aee2c889e877a4b2f94cb1dcb276b113749"} Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.217591 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ftpl8" podStartSLOduration=11.217562135 podStartE2EDuration="11.217562135s" podCreationTimestamp="2025-11-24 11:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:19:00.214428594 +0000 UTC m=+151.145488223" watchObservedRunningTime="2025-11-24 11:19:00.217562135 +0000 UTC m=+151.148621774" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.259020 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.263159 4678 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.263205 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.288366 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcwcn\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.296816 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.296880 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.351165 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.353101 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4sj65"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.354517 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.359479 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.389795 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sj65"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.463615 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-utilities\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.463948 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gwl\" (UniqueName: \"kubernetes.io/projected/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-kube-api-access-r9gwl\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.464090 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-catalog-content\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " 
pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.521200 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.526968 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-kl8pj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.534207 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.550490 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nwmqj"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.552135 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.562502 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.568453 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-catalog-content\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.568568 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-utilities\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc 
kubenswrapper[4678]: I1124 11:19:00.568644 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9gwl\" (UniqueName: \"kubernetes.io/projected/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-kube-api-access-r9gwl\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.569596 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-utilities\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.569811 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-catalog-content\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.571931 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwmqj"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.607607 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9gwl\" (UniqueName: \"kubernetes.io/projected/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-kube-api-access-r9gwl\") pod \"certified-operators-4sj65\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.669693 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-catalog-content\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.669759 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4p8p\" (UniqueName: \"kubernetes.io/projected/c163752f-4564-4b60-b043-fe767dad40e4-kube-api-access-l4p8p\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.669837 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-utilities\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.748333 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zkvr6"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.748899 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.757993 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.772518 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-utilities\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.772581 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-catalog-content\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.772615 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4p8p\" (UniqueName: \"kubernetes.io/projected/c163752f-4564-4b60-b043-fe767dad40e4-kube-api-access-l4p8p\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.776248 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-utilities\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.779914 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-catalog-content\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " 
pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.783908 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkvr6"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.806598 4678 patch_prober.go:28] interesting pod/router-default-5444994796-qlttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:19:00 crc kubenswrapper[4678]: [-]has-synced failed: reason withheld Nov 24 11:19:00 crc kubenswrapper[4678]: [+]process-running ok Nov 24 11:19:00 crc kubenswrapper[4678]: healthz check failed Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.806660 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlttx" podUID="16c36416-1b0e-493e-b349-3dbd7c007e29" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.807815 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4p8p\" (UniqueName: \"kubernetes.io/projected/c163752f-4564-4b60-b043-fe767dad40e4-kube-api-access-l4p8p\") pod \"community-operators-nwmqj\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.873630 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtzv\" (UniqueName: \"kubernetes.io/projected/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-kube-api-access-tmtzv\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.873722 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-utilities\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.873751 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-catalog-content\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.893729 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcwcn"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.898449 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.948346 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-654vm"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.949530 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.974580 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtzv\" (UniqueName: \"kubernetes.io/projected/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-kube-api-access-tmtzv\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.974692 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-utilities\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.974732 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkcqw\" (UniqueName: \"kubernetes.io/projected/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-kube-api-access-mkcqw\") pod \"community-operators-654vm\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.974762 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-catalog-content\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.974795 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-utilities\") pod \"community-operators-654vm\" 
(UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.974866 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-catalog-content\") pod \"community-operators-654vm\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.976072 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-catalog-content\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.976482 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-utilities\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.984772 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-654vm"] Nov 24 11:19:00 crc kubenswrapper[4678]: I1124 11:19:00.999549 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtzv\" (UniqueName: \"kubernetes.io/projected/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-kube-api-access-tmtzv\") pod \"certified-operators-zkvr6\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.075847 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-catalog-content\") pod \"community-operators-654vm\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.075926 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkcqw\" (UniqueName: \"kubernetes.io/projected/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-kube-api-access-mkcqw\") pod \"community-operators-654vm\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.075959 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-utilities\") pod \"community-operators-654vm\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.076438 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-utilities\") pod \"community-operators-654vm\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.076693 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-catalog-content\") pod \"community-operators-654vm\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.102894 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkcqw\" (UniqueName: 
\"kubernetes.io/projected/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-kube-api-access-mkcqw\") pod \"community-operators-654vm\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") " pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.109215 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sj65"] Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.119371 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.202530 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" event={"ID":"5c1ade65-11e8-4529-9885-7630968a4b98","Type":"ContainerStarted","Data":"15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0"} Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.202605 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" event={"ID":"5c1ade65-11e8-4529-9885-7630968a4b98","Type":"ContainerStarted","Data":"a87560927fc8a854653bcf63cba96657e23cc3e7ae34b788013651b7de0f51c3"} Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.203951 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.205718 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sj65" event={"ID":"cdd6866d-2d7f-4bf4-aff4-461ed0c90347","Type":"ContainerStarted","Data":"c6d43726205764634f0a8467a6e0d4a5e3ba62a03aa72fb641fb53215c4398e6"} Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.245144 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" 
podStartSLOduration=130.245118783 podStartE2EDuration="2m10.245118783s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:19:01.238460228 +0000 UTC m=+152.169519867" watchObservedRunningTime="2025-11-24 11:19:01.245118783 +0000 UTC m=+152.176178422" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.255500 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwmqj"] Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.323834 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.436898 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkvr6"] Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.702303 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-654vm"] Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.775782 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.777944 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.780087 4678 patch_prober.go:28] interesting pod/console-f9d7485db-chw9t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.780161 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-chw9t" 
podUID="38101ae8-9e21-4a62-b839-cc42e0562769" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.791854 4678 patch_prober.go:28] interesting pod/router-default-5444994796-qlttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:19:01 crc kubenswrapper[4678]: [-]has-synced failed: reason withheld Nov 24 11:19:01 crc kubenswrapper[4678]: [+]process-running ok Nov 24 11:19:01 crc kubenswrapper[4678]: healthz check failed Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.791955 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlttx" podUID="16c36416-1b0e-493e-b349-3dbd7c007e29" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.818830 4678 patch_prober.go:28] interesting pod/downloads-7954f5f757-zzwvq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.818895 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zzwvq" podUID="fef47a87-3f60-4ee1-a31e-b02583fc2819" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.818905 4678 patch_prober.go:28] interesting pod/downloads-7954f5f757-zzwvq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 
10.217.0.13:8080: connect: connection refused" start-of-body= Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.818997 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zzwvq" podUID="fef47a87-3f60-4ee1-a31e-b02583fc2819" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 24 11:19:01 crc kubenswrapper[4678]: I1124 11:19:01.904133 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.124970 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.125848 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.128189 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.128799 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.176314 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.212507 4678 generic.go:334] "Generic (PLEG): container finished" podID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerID="3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c" exitCode=0 Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.212579 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-654vm" event={"ID":"d55ea26a-6c29-4c66-a0db-2a9e94b21f29","Type":"ContainerDied","Data":"3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c"} Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.212632 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-654vm" event={"ID":"d55ea26a-6c29-4c66-a0db-2a9e94b21f29","Type":"ContainerStarted","Data":"31312b161b0ab74f612f0436a27c15383c2d3a8238de76527d4abd3212f92963"} Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.214252 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.216205 4678 generic.go:334] "Generic (PLEG): container finished" podID="c163752f-4564-4b60-b043-fe767dad40e4" containerID="9587f1542c8d3834ba03f225e95bce24419756cc7ee645c852e88c22fb63e927" exitCode=0 Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.216274 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwmqj" event={"ID":"c163752f-4564-4b60-b043-fe767dad40e4","Type":"ContainerDied","Data":"9587f1542c8d3834ba03f225e95bce24419756cc7ee645c852e88c22fb63e927"} Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.216302 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwmqj" event={"ID":"c163752f-4564-4b60-b043-fe767dad40e4","Type":"ContainerStarted","Data":"8bf6e7cf1d78b141093e585b643d1a12cafb3f739f18d279287b53a21056d678"} Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.224157 4678 generic.go:334] "Generic (PLEG): container finished" podID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerID="a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c" exitCode=0 Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.224235 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zkvr6" event={"ID":"224e7e28-2c19-4df5-bdab-6bd57cfb93ac","Type":"ContainerDied","Data":"a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c"} Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.224271 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkvr6" event={"ID":"224e7e28-2c19-4df5-bdab-6bd57cfb93ac","Type":"ContainerStarted","Data":"c83263fa70933cfd28bfa507d0fcc2af59679178c6107904ebfbe51b9d8af5eb"} Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.229413 4678 generic.go:334] "Generic (PLEG): container finished" podID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerID="306e30e2214a90c830a707f37aafa488aa8c54516c6844941415cbe983ebe0a4" exitCode=0 Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.229590 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sj65" event={"ID":"cdd6866d-2d7f-4bf4-aff4-461ed0c90347","Type":"ContainerDied","Data":"306e30e2214a90c830a707f37aafa488aa8c54516c6844941415cbe983ebe0a4"} Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.241209 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e34ca1d7-8034-48b6-95e0-38287d75504b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e34ca1d7-8034-48b6-95e0-38287d75504b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.241301 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e34ca1d7-8034-48b6-95e0-38287d75504b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e34ca1d7-8034-48b6-95e0-38287d75504b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.342862 
4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e34ca1d7-8034-48b6-95e0-38287d75504b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e34ca1d7-8034-48b6-95e0-38287d75504b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.342918 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e34ca1d7-8034-48b6-95e0-38287d75504b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e34ca1d7-8034-48b6-95e0-38287d75504b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.346780 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e34ca1d7-8034-48b6-95e0-38287d75504b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e34ca1d7-8034-48b6-95e0-38287d75504b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.348612 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwhj"] Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.351511 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.354976 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.367808 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwhj"] Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.386687 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e34ca1d7-8034-48b6-95e0-38287d75504b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e34ca1d7-8034-48b6-95e0-38287d75504b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.444689 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.445553 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-catalog-content\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.445748 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5nr9\" (UniqueName: \"kubernetes.io/projected/439a408b-a1ff-4517-b9b9-31902c9831da-kube-api-access-d5nr9\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.445788 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-utilities\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.548864 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5nr9\" (UniqueName: \"kubernetes.io/projected/439a408b-a1ff-4517-b9b9-31902c9831da-kube-api-access-d5nr9\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.548919 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-utilities\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.548983 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-catalog-content\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.549555 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-catalog-content\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.550203 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-utilities\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.591765 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5nr9\" (UniqueName: \"kubernetes.io/projected/439a408b-a1ff-4517-b9b9-31902c9831da-kube-api-access-d5nr9\") pod \"redhat-marketplace-pqwhj\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.670791 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.731360 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.751567 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8kl7n"] Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.753167 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.774200 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kl7n"] Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.786464 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.791210 4678 patch_prober.go:28] interesting pod/router-default-5444994796-qlttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 11:19:02 crc kubenswrapper[4678]: [-]has-synced failed: reason withheld Nov 24 11:19:02 crc kubenswrapper[4678]: [+]process-running ok Nov 24 11:19:02 crc kubenswrapper[4678]: healthz check failed Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.791330 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlttx" podUID="16c36416-1b0e-493e-b349-3dbd7c007e29" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.864394 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-catalog-content\") pod \"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.864544 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spppq\" (UniqueName: \"kubernetes.io/projected/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-kube-api-access-spppq\") pod 
\"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.864566 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-utilities\") pod \"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.915952 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwhj"] Nov 24 11:19:02 crc kubenswrapper[4678]: W1124 11:19:02.938442 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439a408b_a1ff_4517_b9b9_31902c9831da.slice/crio-a93cc8154d57205ab87bcb4db88ec262d4b4310a63cb4f76ae37624d01b4a035 WatchSource:0}: Error finding container a93cc8154d57205ab87bcb4db88ec262d4b4310a63cb4f76ae37624d01b4a035: Status 404 returned error can't find the container with id a93cc8154d57205ab87bcb4db88ec262d4b4310a63cb4f76ae37624d01b4a035 Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.969150 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spppq\" (UniqueName: \"kubernetes.io/projected/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-kube-api-access-spppq\") pod \"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.969746 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-utilities\") pod \"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " 
pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.969825 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-catalog-content\") pod \"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.971239 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-catalog-content\") pod \"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.971607 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-utilities\") pod \"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:02 crc kubenswrapper[4678]: I1124 11:19:02.998439 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spppq\" (UniqueName: \"kubernetes.io/projected/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-kube-api-access-spppq\") pod \"redhat-marketplace-8kl7n\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.089272 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.255133 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e34ca1d7-8034-48b6-95e0-38287d75504b","Type":"ContainerStarted","Data":"e72c98b5f89eb2e556fc7c5c0e9a3784adf1132a23908c2bd8520d2a5a4757c9"} Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.255493 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e34ca1d7-8034-48b6-95e0-38287d75504b","Type":"ContainerStarted","Data":"c4f27d38cf9f627bcd106e164fff41817544dcfaa0c89bfb44f6d55c2dbdf0fc"} Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.274212 4678 generic.go:334] "Generic (PLEG): container finished" podID="439a408b-a1ff-4517-b9b9-31902c9831da" containerID="7ee18d07b3a3e8a005180e7dbb22b088bbb2bac6b293b159157c998b597101ca" exitCode=0 Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.274397 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwhj" event={"ID":"439a408b-a1ff-4517-b9b9-31902c9831da","Type":"ContainerDied","Data":"7ee18d07b3a3e8a005180e7dbb22b088bbb2bac6b293b159157c998b597101ca"} Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.274444 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwhj" event={"ID":"439a408b-a1ff-4517-b9b9-31902c9831da","Type":"ContainerStarted","Data":"a93cc8154d57205ab87bcb4db88ec262d4b4310a63cb4f76ae37624d01b4a035"} Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.285609 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.285577771 podStartE2EDuration="1.285577771s" podCreationTimestamp="2025-11-24 11:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:19:03.279281188 +0000 UTC m=+154.210340847" watchObservedRunningTime="2025-11-24 11:19:03.285577771 +0000 UTC m=+154.216637410" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.350207 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-wm72k_974b621b-6635-4ca8-b53d-b15ae31b51b0/cluster-samples-operator/0.log" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.350262 4678 generic.go:334] "Generic (PLEG): container finished" podID="974b621b-6635-4ca8-b53d-b15ae31b51b0" containerID="7cb13d7a4bca6bdde24816603f2f57b7a3899337e198a7b0384b21aa2fd7a73f" exitCode=2 Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.350294 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" event={"ID":"974b621b-6635-4ca8-b53d-b15ae31b51b0","Type":"ContainerDied","Data":"7cb13d7a4bca6bdde24816603f2f57b7a3899337e198a7b0384b21aa2fd7a73f"} Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.350831 4678 scope.go:117] "RemoveContainer" containerID="7cb13d7a4bca6bdde24816603f2f57b7a3899337e198a7b0384b21aa2fd7a73f" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.450525 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kl7n"] Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.567832 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9v4tq"] Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.571899 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.575177 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.591653 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9v4tq"] Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.692220 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-utilities\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.692362 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmsc\" (UniqueName: \"kubernetes.io/projected/5ccc31ba-4304-484e-b824-42c6910e59cd-kube-api-access-wgmsc\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.692437 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-catalog-content\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.808594 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-utilities\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " 
pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.808661 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmsc\" (UniqueName: \"kubernetes.io/projected/5ccc31ba-4304-484e-b824-42c6910e59cd-kube-api-access-wgmsc\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.808734 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-catalog-content\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.809325 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-catalog-content\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.809542 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-utilities\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.819417 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.832118 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qlttx" Nov 24 
11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.862172 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmsc\" (UniqueName: \"kubernetes.io/projected/5ccc31ba-4304-484e-b824-42c6910e59cd-kube-api-access-wgmsc\") pod \"redhat-operators-9v4tq\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.925092 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.970249 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bf6l7"] Nov 24 11:19:03 crc kubenswrapper[4678]: I1124 11:19:03.971420 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.053801 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bf6l7"] Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.115557 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-utilities\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.115853 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-catalog-content\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.115997 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtpht\" (UniqueName: \"kubernetes.io/projected/cde0a2ac-63b7-4301-9933-34fe08f499a9-kube-api-access-qtpht\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.218003 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-utilities\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.218067 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-catalog-content\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.218146 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtpht\" (UniqueName: \"kubernetes.io/projected/cde0a2ac-63b7-4301-9933-34fe08f499a9-kube-api-access-qtpht\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.219703 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-utilities\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.219953 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-catalog-content\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.251599 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtpht\" (UniqueName: \"kubernetes.io/projected/cde0a2ac-63b7-4301-9933-34fe08f499a9-kube-api-access-qtpht\") pod \"redhat-operators-bf6l7\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.300124 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.393307 4678 generic.go:334] "Generic (PLEG): container finished" podID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerID="648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e" exitCode=0 Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.393817 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kl7n" event={"ID":"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68","Type":"ContainerDied","Data":"648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e"} Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.394215 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kl7n" event={"ID":"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68","Type":"ContainerStarted","Data":"d9120dd74d46a6721bbf534a56b7e3dcca9cf01c53a742e9e1724791da01c4b0"} Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.423787 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-wm72k_974b621b-6635-4ca8-b53d-b15ae31b51b0/cluster-samples-operator/0.log" Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.423893 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wm72k" event={"ID":"974b621b-6635-4ca8-b53d-b15ae31b51b0","Type":"ContainerStarted","Data":"bbc3bfa0eb6a8ce8fbe1f1fb96474682f1552031bd56e3a2a59194dc10f5e622"} Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.456458 4678 generic.go:334] "Generic (PLEG): container finished" podID="e34ca1d7-8034-48b6-95e0-38287d75504b" containerID="e72c98b5f89eb2e556fc7c5c0e9a3784adf1132a23908c2bd8520d2a5a4757c9" exitCode=0 Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.456600 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e34ca1d7-8034-48b6-95e0-38287d75504b","Type":"ContainerDied","Data":"e72c98b5f89eb2e556fc7c5c0e9a3784adf1132a23908c2bd8520d2a5a4757c9"} Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.501214 4678 generic.go:334] "Generic (PLEG): container finished" podID="daea8216-5097-43f5-913a-eda16abaf508" containerID="795be823b1b1551d8ba9b667b4101d5059f40c8d7daa8be3adc7ead041418d4f" exitCode=0 Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.502099 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" event={"ID":"daea8216-5097-43f5-913a-eda16abaf508","Type":"ContainerDied","Data":"795be823b1b1551d8ba9b667b4101d5059f40c8d7daa8be3adc7ead041418d4f"} Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.557172 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9v4tq"] Nov 24 11:19:04 crc kubenswrapper[4678]: I1124 11:19:04.826828 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-bf6l7"] Nov 24 11:19:05 crc kubenswrapper[4678]: I1124 11:19:05.522243 4678 generic.go:334] "Generic (PLEG): container finished" podID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerID="c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb" exitCode=0 Nov 24 11:19:05 crc kubenswrapper[4678]: I1124 11:19:05.522342 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bf6l7" event={"ID":"cde0a2ac-63b7-4301-9933-34fe08f499a9","Type":"ContainerDied","Data":"c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb"} Nov 24 11:19:05 crc kubenswrapper[4678]: I1124 11:19:05.522373 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bf6l7" event={"ID":"cde0a2ac-63b7-4301-9933-34fe08f499a9","Type":"ContainerStarted","Data":"2be2f3dd3896fe986566060842aad6c5fa80fb3220e2ec0410043d973e7a5823"} Nov 24 11:19:05 crc kubenswrapper[4678]: I1124 11:19:05.530354 4678 generic.go:334] "Generic (PLEG): container finished" podID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerID="f1a77dff05214dacaf8020d5076ae251abf85a303c365b48596a0869349aaad6" exitCode=0 Nov 24 11:19:05 crc kubenswrapper[4678]: I1124 11:19:05.531659 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v4tq" event={"ID":"5ccc31ba-4304-484e-b824-42c6910e59cd","Type":"ContainerDied","Data":"f1a77dff05214dacaf8020d5076ae251abf85a303c365b48596a0869349aaad6"} Nov 24 11:19:05 crc kubenswrapper[4678]: I1124 11:19:05.531750 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v4tq" event={"ID":"5ccc31ba-4304-484e-b824-42c6910e59cd","Type":"ContainerStarted","Data":"933fecf2c0342ce2253b9e012aa20eb7bb04bbea35c7f76387dcb2d316f70cad"} Nov 24 11:19:05 crc kubenswrapper[4678]: I1124 11:19:05.948939 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 11:19:05 crc kubenswrapper[4678]: I1124 11:19:05.953807 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.067630 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e34ca1d7-8034-48b6-95e0-38287d75504b-kube-api-access\") pod \"e34ca1d7-8034-48b6-95e0-38287d75504b\" (UID: \"e34ca1d7-8034-48b6-95e0-38287d75504b\") " Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.067763 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daea8216-5097-43f5-913a-eda16abaf508-config-volume\") pod \"daea8216-5097-43f5-913a-eda16abaf508\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.067818 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e34ca1d7-8034-48b6-95e0-38287d75504b-kubelet-dir\") pod \"e34ca1d7-8034-48b6-95e0-38287d75504b\" (UID: \"e34ca1d7-8034-48b6-95e0-38287d75504b\") " Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.067876 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daea8216-5097-43f5-913a-eda16abaf508-secret-volume\") pod \"daea8216-5097-43f5-913a-eda16abaf508\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.067972 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8fgm\" (UniqueName: \"kubernetes.io/projected/daea8216-5097-43f5-913a-eda16abaf508-kube-api-access-q8fgm\") pod 
\"daea8216-5097-43f5-913a-eda16abaf508\" (UID: \"daea8216-5097-43f5-913a-eda16abaf508\") " Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.069725 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daea8216-5097-43f5-913a-eda16abaf508-config-volume" (OuterVolumeSpecName: "config-volume") pod "daea8216-5097-43f5-913a-eda16abaf508" (UID: "daea8216-5097-43f5-913a-eda16abaf508"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.069954 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e34ca1d7-8034-48b6-95e0-38287d75504b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e34ca1d7-8034-48b6-95e0-38287d75504b" (UID: "e34ca1d7-8034-48b6-95e0-38287d75504b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.085719 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34ca1d7-8034-48b6-95e0-38287d75504b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e34ca1d7-8034-48b6-95e0-38287d75504b" (UID: "e34ca1d7-8034-48b6-95e0-38287d75504b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.087967 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daea8216-5097-43f5-913a-eda16abaf508-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "daea8216-5097-43f5-913a-eda16abaf508" (UID: "daea8216-5097-43f5-913a-eda16abaf508"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.095992 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daea8216-5097-43f5-913a-eda16abaf508-kube-api-access-q8fgm" (OuterVolumeSpecName: "kube-api-access-q8fgm") pod "daea8216-5097-43f5-913a-eda16abaf508" (UID: "daea8216-5097-43f5-913a-eda16abaf508"). InnerVolumeSpecName "kube-api-access-q8fgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.170592 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8fgm\" (UniqueName: \"kubernetes.io/projected/daea8216-5097-43f5-913a-eda16abaf508-kube-api-access-q8fgm\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.170642 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e34ca1d7-8034-48b6-95e0-38287d75504b-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.170653 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daea8216-5097-43f5-913a-eda16abaf508-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.170697 4678 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e34ca1d7-8034-48b6-95e0-38287d75504b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.170713 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/daea8216-5097-43f5-913a-eda16abaf508-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.541131 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e34ca1d7-8034-48b6-95e0-38287d75504b","Type":"ContainerDied","Data":"c4f27d38cf9f627bcd106e164fff41817544dcfaa0c89bfb44f6d55c2dbdf0fc"}
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.541204 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f27d38cf9f627bcd106e164fff41817544dcfaa0c89bfb44f6d55c2dbdf0fc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.541288 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.586729 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj" event={"ID":"daea8216-5097-43f5-913a-eda16abaf508","Type":"ContainerDied","Data":"938c4aba87a4c7e300879af406b1fb35b49d1adb6b8b878d75def08dc4915421"}
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.586775 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="938c4aba87a4c7e300879af406b1fb35b49d1adb6b8b878d75def08dc4915421"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.586833 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.648034 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 11:19:06 crc kubenswrapper[4678]: E1124 11:19:06.648434 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daea8216-5097-43f5-913a-eda16abaf508" containerName="collect-profiles"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.648449 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="daea8216-5097-43f5-913a-eda16abaf508" containerName="collect-profiles"
Nov 24 11:19:06 crc kubenswrapper[4678]: E1124 11:19:06.648471 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34ca1d7-8034-48b6-95e0-38287d75504b" containerName="pruner"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.648477 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34ca1d7-8034-48b6-95e0-38287d75504b" containerName="pruner"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.648591 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="daea8216-5097-43f5-913a-eda16abaf508" containerName="collect-profiles"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.648599 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34ca1d7-8034-48b6-95e0-38287d75504b" containerName="pruner"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.649167 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.655147 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.655756 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.674264 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.780329 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.780445 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.882452 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.882571 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.882714 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.906587 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:06 crc kubenswrapper[4678]: I1124 11:19:06.977420 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:07 crc kubenswrapper[4678]: I1124 11:19:07.459000 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 11:19:07 crc kubenswrapper[4678]: I1124 11:19:07.626613 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s"
Nov 24 11:19:07 crc kubenswrapper[4678]: I1124 11:19:07.643916 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c764a93d-9afc-48b6-aabc-5f46d7ee745d","Type":"ContainerStarted","Data":"3b762ee4106b604cb04abc7a798c0e521ada3ccc549aa3ac383eaddc688690b6"}
Nov 24 11:19:07 crc kubenswrapper[4678]: I1124 11:19:07.695354 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xpp8n"
Nov 24 11:19:08 crc kubenswrapper[4678]: I1124 11:19:08.680425 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c764a93d-9afc-48b6-aabc-5f46d7ee745d","Type":"ContainerStarted","Data":"e922c3969bfb4ed95daf58cfdd3f28e9a7320305efb2430664dc5d43565874d6"}
Nov 24 11:19:08 crc kubenswrapper[4678]: I1124 11:19:08.711642 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.7116180290000003 podStartE2EDuration="2.711618029s" podCreationTimestamp="2025-11-24 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:19:08.705219053 +0000 UTC m=+159.636278692" watchObservedRunningTime="2025-11-24 11:19:08.711618029 +0000 UTC m=+159.642677668"
Nov 24 11:19:09 crc kubenswrapper[4678]: I1124 11:19:09.706567 4678 generic.go:334] "Generic (PLEG): container finished" podID="c764a93d-9afc-48b6-aabc-5f46d7ee745d" containerID="e922c3969bfb4ed95daf58cfdd3f28e9a7320305efb2430664dc5d43565874d6" exitCode=0
Nov 24 11:19:09 crc kubenswrapper[4678]: I1124 11:19:09.706697 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c764a93d-9afc-48b6-aabc-5f46d7ee745d","Type":"ContainerDied","Data":"e922c3969bfb4ed95daf58cfdd3f28e9a7320305efb2430664dc5d43565874d6"}
Nov 24 11:19:11 crc kubenswrapper[4678]: I1124 11:19:11.840765 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zzwvq"
Nov 24 11:19:11 crc kubenswrapper[4678]: I1124 11:19:11.859841 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-chw9t"
Nov 24 11:19:11 crc kubenswrapper[4678]: I1124 11:19:11.868550 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-chw9t"
Nov 24 11:19:13 crc kubenswrapper[4678]: I1124 11:19:13.923637 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk"
Nov 24 11:19:13 crc kubenswrapper[4678]: I1124 11:19:13.938582 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dca80848-6c0a-4946-980a-197e2ecfc898-metrics-certs\") pod \"network-metrics-daemon-pg6bk\" (UID: \"dca80848-6c0a-4946-980a-197e2ecfc898\") " pod="openshift-multus/network-metrics-daemon-pg6bk"
Nov 24 11:19:14 crc kubenswrapper[4678]: I1124 11:19:14.128258 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pg6bk"
Nov 24 11:19:17 crc kubenswrapper[4678]: I1124 11:19:17.787751 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c764a93d-9afc-48b6-aabc-5f46d7ee745d","Type":"ContainerDied","Data":"3b762ee4106b604cb04abc7a798c0e521ada3ccc549aa3ac383eaddc688690b6"}
Nov 24 11:19:17 crc kubenswrapper[4678]: I1124 11:19:17.787819 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b762ee4106b604cb04abc7a798c0e521ada3ccc549aa3ac383eaddc688690b6"
Nov 24 11:19:17 crc kubenswrapper[4678]: I1124 11:19:17.831588 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:17 crc kubenswrapper[4678]: I1124 11:19:17.988542 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kubelet-dir\") pod \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\" (UID: \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\") "
Nov 24 11:19:17 crc kubenswrapper[4678]: I1124 11:19:17.988725 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kube-api-access\") pod \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\" (UID: \"c764a93d-9afc-48b6-aabc-5f46d7ee745d\") "
Nov 24 11:19:17 crc kubenswrapper[4678]: I1124 11:19:17.988728 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c764a93d-9afc-48b6-aabc-5f46d7ee745d" (UID: "c764a93d-9afc-48b6-aabc-5f46d7ee745d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 11:19:17 crc kubenswrapper[4678]: I1124 11:19:17.989037 4678 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 24 11:19:17 crc kubenswrapper[4678]: I1124 11:19:17.993587 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c764a93d-9afc-48b6-aabc-5f46d7ee745d" (UID: "c764a93d-9afc-48b6-aabc-5f46d7ee745d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:19:18 crc kubenswrapper[4678]: I1124 11:19:18.090414 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c764a93d-9afc-48b6-aabc-5f46d7ee745d-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 24 11:19:18 crc kubenswrapper[4678]: I1124 11:19:18.793763 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 11:19:20 crc kubenswrapper[4678]: I1124 11:19:20.541552 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn"
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.298712 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.301351 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.643167 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pg6bk"]
Nov 24 11:19:30 crc kubenswrapper[4678]: W1124 11:19:30.650585 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddca80848_6c0a_4946_980a_197e2ecfc898.slice/crio-019ace145df4b762c6ba5053d71d96b1764aaef9f8adad04cf150b5e58dabd26 WatchSource:0}: Error finding container 019ace145df4b762c6ba5053d71d96b1764aaef9f8adad04cf150b5e58dabd26: Status 404 returned error can't find the container with id 019ace145df4b762c6ba5053d71d96b1764aaef9f8adad04cf150b5e58dabd26
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.868867 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" event={"ID":"dca80848-6c0a-4946-980a-197e2ecfc898","Type":"ContainerStarted","Data":"019ace145df4b762c6ba5053d71d96b1764aaef9f8adad04cf150b5e58dabd26"}
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.871641 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bf6l7" event={"ID":"cde0a2ac-63b7-4301-9933-34fe08f499a9","Type":"ContainerStarted","Data":"8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91"}
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.874750 4678 generic.go:334] "Generic (PLEG): container finished" podID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerID="162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.874839 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kl7n" event={"ID":"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68","Type":"ContainerDied","Data":"162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764"}
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.883747 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-654vm" event={"ID":"d55ea26a-6c29-4c66-a0db-2a9e94b21f29","Type":"ContainerStarted","Data":"a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7"}
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.886192 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwmqj" event={"ID":"c163752f-4564-4b60-b043-fe767dad40e4","Type":"ContainerStarted","Data":"05d12a35dc660692b80b0217bf58f2a58dba893ae46c7960f020403eb12c15f7"}
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.890212 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v4tq" event={"ID":"5ccc31ba-4304-484e-b824-42c6910e59cd","Type":"ContainerStarted","Data":"64170346bd4885bee54b1c59dfd0390a5795a1a222f95fe06ee452eba1e86ee7"}
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.894144 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkvr6" event={"ID":"224e7e28-2c19-4df5-bdab-6bd57cfb93ac","Type":"ContainerStarted","Data":"714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22"}
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.896013 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sj65" event={"ID":"cdd6866d-2d7f-4bf4-aff4-461ed0c90347","Type":"ContainerStarted","Data":"23573a7669cd6ec661b03fc67828d3d8e10b049fbbcdf7729276e4c475a381bd"}
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.897396 4678 generic.go:334] "Generic (PLEG): container finished" podID="439a408b-a1ff-4517-b9b9-31902c9831da" containerID="9874fc0349044a0622b2b75ce587b8a8ddd7385735dd3fd0829b4cc03ccdb04e" exitCode=0
Nov 24 11:19:30 crc kubenswrapper[4678]: I1124 11:19:30.897506 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwhj" event={"ID":"439a408b-a1ff-4517-b9b9-31902c9831da","Type":"ContainerDied","Data":"9874fc0349044a0622b2b75ce587b8a8ddd7385735dd3fd0829b4cc03ccdb04e"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.907323 4678 generic.go:334] "Generic (PLEG): container finished" podID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerID="a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7" exitCode=0
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.907388 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-654vm" event={"ID":"d55ea26a-6c29-4c66-a0db-2a9e94b21f29","Type":"ContainerDied","Data":"a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.911474 4678 generic.go:334] "Generic (PLEG): container finished" podID="c163752f-4564-4b60-b043-fe767dad40e4" containerID="05d12a35dc660692b80b0217bf58f2a58dba893ae46c7960f020403eb12c15f7" exitCode=0
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.912350 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwmqj" event={"ID":"c163752f-4564-4b60-b043-fe767dad40e4","Type":"ContainerDied","Data":"05d12a35dc660692b80b0217bf58f2a58dba893ae46c7960f020403eb12c15f7"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.913913 4678 generic.go:334] "Generic (PLEG): container finished" podID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerID="64170346bd4885bee54b1c59dfd0390a5795a1a222f95fe06ee452eba1e86ee7" exitCode=0
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.913985 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v4tq" event={"ID":"5ccc31ba-4304-484e-b824-42c6910e59cd","Type":"ContainerDied","Data":"64170346bd4885bee54b1c59dfd0390a5795a1a222f95fe06ee452eba1e86ee7"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.916103 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" event={"ID":"dca80848-6c0a-4946-980a-197e2ecfc898","Type":"ContainerStarted","Data":"4c0098ec0808ffd778568f81f582976041d3416766be19d1cf24ffdd506b1ec3"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.916134 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pg6bk" event={"ID":"dca80848-6c0a-4946-980a-197e2ecfc898","Type":"ContainerStarted","Data":"c6331b65810cb4dc0ad749db28f77a6373a83e9ffcee41cb6e3a4d12eff1f9f9"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.918275 4678 generic.go:334] "Generic (PLEG): container finished" podID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerID="714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22" exitCode=0
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.918337 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkvr6" event={"ID":"224e7e28-2c19-4df5-bdab-6bd57cfb93ac","Type":"ContainerDied","Data":"714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.920922 4678 generic.go:334] "Generic (PLEG): container finished" podID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerID="23573a7669cd6ec661b03fc67828d3d8e10b049fbbcdf7729276e4c475a381bd" exitCode=0
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.920983 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sj65" event={"ID":"cdd6866d-2d7f-4bf4-aff4-461ed0c90347","Type":"ContainerDied","Data":"23573a7669cd6ec661b03fc67828d3d8e10b049fbbcdf7729276e4c475a381bd"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.924526 4678 generic.go:334] "Generic (PLEG): container finished" podID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerID="8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91" exitCode=0
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.924606 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bf6l7" event={"ID":"cde0a2ac-63b7-4301-9933-34fe08f499a9","Type":"ContainerDied","Data":"8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91"}
Nov 24 11:19:31 crc kubenswrapper[4678]: I1124 11:19:31.997403 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pg6bk" podStartSLOduration=160.997382701 podStartE2EDuration="2m40.997382701s" podCreationTimestamp="2025-11-24 11:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:19:31.991115438 +0000 UTC m=+182.922175087" watchObservedRunningTime="2025-11-24 11:19:31.997382701 +0000 UTC m=+182.928442340"
Nov 24 11:19:32 crc kubenswrapper[4678]: I1124 11:19:32.646943 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mgcsk"
Nov 24 11:19:32 crc kubenswrapper[4678]: I1124 11:19:32.933360 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwhj" event={"ID":"439a408b-a1ff-4517-b9b9-31902c9831da","Type":"ContainerStarted","Data":"0c80a9d6d861b2153d90e8bf131db00231466cbc3be4995f125036b16e9401c1"}
Nov 24 11:19:34 crc kubenswrapper[4678]: I1124 11:19:34.946833 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kl7n" event={"ID":"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68","Type":"ContainerStarted","Data":"5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935"}
Nov 24 11:19:34 crc kubenswrapper[4678]: I1124 11:19:34.967656 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pqwhj" podStartSLOduration=3.776915688 podStartE2EDuration="32.967635573s" podCreationTimestamp="2025-11-24 11:19:02 +0000 UTC" firstStartedPulling="2025-11-24 11:19:03.284897532 +0000 UTC m=+154.215957171" lastFinishedPulling="2025-11-24 11:19:32.475617407 +0000 UTC m=+183.406677056" observedRunningTime="2025-11-24 11:19:33.971161392 +0000 UTC m=+184.902221111" watchObservedRunningTime="2025-11-24 11:19:34.967635573 +0000 UTC m=+185.898695212"
Nov 24 11:19:34 crc kubenswrapper[4678]: I1124 11:19:34.968238 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8kl7n" podStartSLOduration=3.310825566 podStartE2EDuration="32.9682319s" podCreationTimestamp="2025-11-24 11:19:02 +0000 UTC" firstStartedPulling="2025-11-24 11:19:04.402238952 +0000 UTC m=+155.333298581" lastFinishedPulling="2025-11-24 11:19:34.059645276 +0000 UTC m=+184.990704915" observedRunningTime="2025-11-24 11:19:34.9668634 +0000 UTC m=+185.897923059" watchObservedRunningTime="2025-11-24 11:19:34.9682319 +0000 UTC m=+185.899291539"
Nov 24 11:19:35 crc kubenswrapper[4678]: I1124 11:19:35.956613 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-654vm" event={"ID":"d55ea26a-6c29-4c66-a0db-2a9e94b21f29","Type":"ContainerStarted","Data":"056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f"}
Nov 24 11:19:35 crc kubenswrapper[4678]: I1124 11:19:35.960394 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwmqj" event={"ID":"c163752f-4564-4b60-b043-fe767dad40e4","Type":"ContainerStarted","Data":"310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652"}
Nov 24 11:19:35 crc kubenswrapper[4678]: I1124 11:19:35.964688 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v4tq" event={"ID":"5ccc31ba-4304-484e-b824-42c6910e59cd","Type":"ContainerStarted","Data":"9f8b3772222103f29d5b8085784f6360e1c876b0aae000ba6414fe448a22e1a9"}
Nov 24 11:19:35 crc kubenswrapper[4678]: I1124 11:19:35.967808 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkvr6" event={"ID":"224e7e28-2c19-4df5-bdab-6bd57cfb93ac","Type":"ContainerStarted","Data":"a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d"}
Nov 24 11:19:35 crc kubenswrapper[4678]: I1124 11:19:35.970295 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sj65" event={"ID":"cdd6866d-2d7f-4bf4-aff4-461ed0c90347","Type":"ContainerStarted","Data":"7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1"}
Nov 24 11:19:35 crc kubenswrapper[4678]: I1124 11:19:35.972739 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bf6l7" event={"ID":"cde0a2ac-63b7-4301-9933-34fe08f499a9","Type":"ContainerStarted","Data":"5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1"}
Nov 24 11:19:36 crc kubenswrapper[4678]: I1124 11:19:36.009995 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9v4tq" podStartSLOduration=3.131051059 podStartE2EDuration="33.009961242s" podCreationTimestamp="2025-11-24 11:19:03 +0000 UTC" firstStartedPulling="2025-11-24 11:19:05.534354093 +0000 UTC m=+156.465413732" lastFinishedPulling="2025-11-24 11:19:35.413264286 +0000 UTC m=+186.344323915" observedRunningTime="2025-11-24 11:19:36.0068024 +0000 UTC m=+186.937862059" watchObservedRunningTime="2025-11-24 11:19:36.009961242 +0000 UTC m=+186.941020901"
Nov 24 11:19:36 crc kubenswrapper[4678]: I1124 11:19:36.010255 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-654vm" podStartSLOduration=2.874566358 podStartE2EDuration="36.01024889s" podCreationTimestamp="2025-11-24 11:19:00 +0000 UTC" firstStartedPulling="2025-11-24 11:19:02.213949236 +0000 UTC m=+153.145008875" lastFinishedPulling="2025-11-24 11:19:35.349631778 +0000 UTC m=+186.280691407" observedRunningTime="2025-11-24 11:19:35.986792736 +0000 UTC m=+186.917852375" watchObservedRunningTime="2025-11-24 11:19:36.01024889 +0000 UTC m=+186.941308529"
Nov 24 11:19:36 crc kubenswrapper[4678]: I1124 11:19:36.062949 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zkvr6" podStartSLOduration=3.016419882 podStartE2EDuration="36.062927209s" podCreationTimestamp="2025-11-24 11:19:00 +0000 UTC" firstStartedPulling="2025-11-24 11:19:02.226569545 +0000 UTC m=+153.157629184" lastFinishedPulling="2025-11-24 11:19:35.273076862 +0000 UTC m=+186.204136511" observedRunningTime="2025-11-24 11:19:36.042121321 +0000 UTC m=+186.973180960" watchObservedRunningTime="2025-11-24 11:19:36.062927209 +0000 UTC m=+186.993986838"
Nov 24 11:19:36 crc kubenswrapper[4678]: I1124 11:19:36.065748 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4sj65" podStartSLOduration=3.072978973 podStartE2EDuration="36.065740991s" podCreationTimestamp="2025-11-24 11:19:00 +0000 UTC" firstStartedPulling="2025-11-24 11:19:02.232293702 +0000 UTC m=+153.163353341" lastFinishedPulling="2025-11-24 11:19:35.2250557 +0000 UTC m=+186.156115359" observedRunningTime="2025-11-24 11:19:36.061070755 +0000 UTC m=+186.992130394" watchObservedRunningTime="2025-11-24 11:19:36.065740991 +0000 UTC m=+186.996800630"
Nov 24 11:19:36 crc kubenswrapper[4678]: I1124 11:19:36.087014 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bf6l7" podStartSLOduration=3.4348143589999998 podStartE2EDuration="33.086996891s" podCreationTimestamp="2025-11-24 11:19:03 +0000 UTC" firstStartedPulling="2025-11-24 11:19:05.535796895 +0000 UTC m=+156.466856534" lastFinishedPulling="2025-11-24 11:19:35.187979427 +0000 UTC m=+186.119039066" observedRunningTime="2025-11-24 11:19:36.084808988 +0000 UTC m=+187.015868627" watchObservedRunningTime="2025-11-24 11:19:36.086996891 +0000 UTC m=+187.018056530"
Nov 24 11:19:36 crc kubenswrapper[4678]: I1124 11:19:36.120451 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nwmqj" podStartSLOduration=2.8941619210000002 podStartE2EDuration="36.120422118s" podCreationTimestamp="2025-11-24 11:19:00 +0000 UTC" firstStartedPulling="2025-11-24 11:19:02.22092606 +0000 UTC m=+153.151985699" lastFinishedPulling="2025-11-24 11:19:35.447186257 +0000 UTC m=+186.378245896" observedRunningTime="2025-11-24 11:19:36.1164143 +0000 UTC m=+187.047473939" watchObservedRunningTime="2025-11-24 11:19:36.120422118 +0000 UTC m=+187.051481757"
Nov 24 11:19:37 crc kubenswrapper[4678]: I1124 11:19:37.621181 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 11:19:40 crc kubenswrapper[4678]: I1124 11:19:40.516866 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tf9mj"]
Nov 24 11:19:40 crc kubenswrapper[4678]: I1124 11:19:40.750283 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4sj65"
Nov 24 11:19:40 crc kubenswrapper[4678]: I1124 11:19:40.750342 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4sj65"
Nov 24 11:19:40 crc kubenswrapper[4678]: I1124 11:19:40.900262 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nwmqj"
Nov 24 11:19:40 crc kubenswrapper[4678]: I1124 11:19:40.900328 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nwmqj"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.119637 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zkvr6"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.119715 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zkvr6"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.148160 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nwmqj"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.149778 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4sj65"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.166327 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zkvr6"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.204518 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4sj65"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.205336 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nwmqj"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.325091 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-654vm"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.325245 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-654vm"
Nov 24 11:19:41 crc kubenswrapper[4678]: I1124 11:19:41.387187 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-654vm"
Nov 24 11:19:42 crc kubenswrapper[4678]: I1124 11:19:42.079886 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zkvr6"
Nov 24 11:19:42 crc kubenswrapper[4678]: I1124 11:19:42.091218 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-654vm"
Nov 24 11:19:42 crc kubenswrapper[4678]: I1124 11:19:42.556587 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-654vm"]
Nov 24 11:19:42 crc kubenswrapper[4678]: I1124 11:19:42.670917 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pqwhj"
Nov 24 11:19:42 crc kubenswrapper[4678]: I1124 11:19:42.671044 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pqwhj"
Nov 24 11:19:42 crc kubenswrapper[4678]: I1124 11:19:42.725664 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pqwhj"
Nov 24 11:19:43 crc kubenswrapper[4678]: I1124 11:19:43.062347 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pqwhj"
Nov 24 11:19:43 crc kubenswrapper[4678]: I1124 11:19:43.090991 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8kl7n"
Nov 24 11:19:43 crc kubenswrapper[4678]: I1124 11:19:43.091045 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8kl7n"
Nov 24 11:19:43 crc kubenswrapper[4678]: I1124 11:19:43.133980 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8kl7n"
Nov 24 11:19:43 crc kubenswrapper[4678]: I1124 11:19:43.553779 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkvr6"]
Nov 24 11:19:43 crc kubenswrapper[4678]: I1124 11:19:43.926815 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9v4tq"
Nov 24 11:19:43 crc kubenswrapper[4678]: I1124 11:19:43.926887 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9v4tq"
Nov 24 11:19:43 crc kubenswrapper[4678]: I1124 11:19:43.975836 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9v4tq"
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.026412 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-654vm" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerName="registry-server" containerID="cri-o://056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f" gracePeriod=2
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.027050 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zkvr6" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerName="registry-server" containerID="cri-o://a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d" gracePeriod=2
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.071373 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8kl7n"
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.085097 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9v4tq"
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.301099 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bf6l7"
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.301195 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bf6l7"
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.373318 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bf6l7"
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.483801 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-654vm"
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.492882 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkvr6"
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.652490 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-utilities\") pod \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") "
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.652556 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-catalog-content\") pod \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") "
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.652601 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmtzv\" (UniqueName: \"kubernetes.io/projected/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-kube-api-access-tmtzv\") pod \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") "
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.652726 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkcqw\" (UniqueName: \"kubernetes.io/projected/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-kube-api-access-mkcqw\") pod \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") "
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.652755 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-catalog-content\") pod \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\" (UID: \"d55ea26a-6c29-4c66-a0db-2a9e94b21f29\") "
Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.652819 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-utilities\") pod \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\" (UID: \"224e7e28-2c19-4df5-bdab-6bd57cfb93ac\") " Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.653689 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-utilities" (OuterVolumeSpecName: "utilities") pod "d55ea26a-6c29-4c66-a0db-2a9e94b21f29" (UID: "d55ea26a-6c29-4c66-a0db-2a9e94b21f29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.653861 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-utilities" (OuterVolumeSpecName: "utilities") pod "224e7e28-2c19-4df5-bdab-6bd57cfb93ac" (UID: "224e7e28-2c19-4df5-bdab-6bd57cfb93ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.660771 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-kube-api-access-mkcqw" (OuterVolumeSpecName: "kube-api-access-mkcqw") pod "d55ea26a-6c29-4c66-a0db-2a9e94b21f29" (UID: "d55ea26a-6c29-4c66-a0db-2a9e94b21f29"). InnerVolumeSpecName "kube-api-access-mkcqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.660840 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-kube-api-access-tmtzv" (OuterVolumeSpecName: "kube-api-access-tmtzv") pod "224e7e28-2c19-4df5-bdab-6bd57cfb93ac" (UID: "224e7e28-2c19-4df5-bdab-6bd57cfb93ac"). InnerVolumeSpecName "kube-api-access-tmtzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.719760 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "224e7e28-2c19-4df5-bdab-6bd57cfb93ac" (UID: "224e7e28-2c19-4df5-bdab-6bd57cfb93ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.732778 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d55ea26a-6c29-4c66-a0db-2a9e94b21f29" (UID: "d55ea26a-6c29-4c66-a0db-2a9e94b21f29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.754852 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.754908 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.754937 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.754951 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmtzv\" (UniqueName: \"kubernetes.io/projected/224e7e28-2c19-4df5-bdab-6bd57cfb93ac-kube-api-access-tmtzv\") on node \"crc\" 
DevicePath \"\"" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.754963 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkcqw\" (UniqueName: \"kubernetes.io/projected/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-kube-api-access-mkcqw\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:44 crc kubenswrapper[4678]: I1124 11:19:44.754973 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55ea26a-6c29-4c66-a0db-2a9e94b21f29-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.034248 4678 generic.go:334] "Generic (PLEG): container finished" podID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerID="056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f" exitCode=0 Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.034341 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-654vm" event={"ID":"d55ea26a-6c29-4c66-a0db-2a9e94b21f29","Type":"ContainerDied","Data":"056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f"} Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.034379 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-654vm" event={"ID":"d55ea26a-6c29-4c66-a0db-2a9e94b21f29","Type":"ContainerDied","Data":"31312b161b0ab74f612f0436a27c15383c2d3a8238de76527d4abd3212f92963"} Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.034406 4678 scope.go:117] "RemoveContainer" containerID="056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.034440 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-654vm" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.041342 4678 generic.go:334] "Generic (PLEG): container finished" podID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerID="a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d" exitCode=0 Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.041971 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkvr6" event={"ID":"224e7e28-2c19-4df5-bdab-6bd57cfb93ac","Type":"ContainerDied","Data":"a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d"} Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.042073 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkvr6" event={"ID":"224e7e28-2c19-4df5-bdab-6bd57cfb93ac","Type":"ContainerDied","Data":"c83263fa70933cfd28bfa507d0fcc2af59679178c6107904ebfbe51b9d8af5eb"} Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.042134 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zkvr6" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.053892 4678 scope.go:117] "RemoveContainer" containerID="a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.088608 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-654vm"] Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.092180 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-654vm"] Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.111690 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkvr6"] Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.120832 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zkvr6"] Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.128260 4678 scope.go:117] "RemoveContainer" containerID="3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.135215 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.147720 4678 scope.go:117] "RemoveContainer" containerID="056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f" Nov 24 11:19:45 crc kubenswrapper[4678]: E1124 11:19:45.148362 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f\": container with ID starting with 056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f not found: ID does not exist" containerID="056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f" Nov 24 11:19:45 crc 
kubenswrapper[4678]: I1124 11:19:45.148412 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f"} err="failed to get container status \"056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f\": rpc error: code = NotFound desc = could not find container \"056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f\": container with ID starting with 056eca2aecb68a0fcc6dbd487d704ad8f3044f452a9dd43e7fe29d75270f877f not found: ID does not exist" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.148464 4678 scope.go:117] "RemoveContainer" containerID="a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7" Nov 24 11:19:45 crc kubenswrapper[4678]: E1124 11:19:45.149177 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7\": container with ID starting with a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7 not found: ID does not exist" containerID="a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.149224 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7"} err="failed to get container status \"a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7\": rpc error: code = NotFound desc = could not find container \"a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7\": container with ID starting with a890c24e845a3334ec012f2c7af08446640ae8a848b7c7690ea2292bbd7313b7 not found: ID does not exist" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.149258 4678 scope.go:117] "RemoveContainer" containerID="3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c" Nov 24 
11:19:45 crc kubenswrapper[4678]: E1124 11:19:45.149580 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c\": container with ID starting with 3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c not found: ID does not exist" containerID="3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.149602 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c"} err="failed to get container status \"3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c\": rpc error: code = NotFound desc = could not find container \"3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c\": container with ID starting with 3f2cbc61629dbca4e1bd5c2c119cfb29a8bacbdc05225508b7a48d4e4c2dfa0c not found: ID does not exist" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.149615 4678 scope.go:117] "RemoveContainer" containerID="a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.207342 4678 scope.go:117] "RemoveContainer" containerID="714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.232732 4678 scope.go:117] "RemoveContainer" containerID="a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.269491 4678 scope.go:117] "RemoveContainer" containerID="a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d" Nov 24 11:19:45 crc kubenswrapper[4678]: E1124 11:19:45.270237 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d\": container with ID starting with a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d not found: ID does not exist" containerID="a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.270291 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d"} err="failed to get container status \"a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d\": rpc error: code = NotFound desc = could not find container \"a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d\": container with ID starting with a07004b6eb7fca9812778997604bcfea094efd145b6c93286a82f72354283e2d not found: ID does not exist" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.270321 4678 scope.go:117] "RemoveContainer" containerID="714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22" Nov 24 11:19:45 crc kubenswrapper[4678]: E1124 11:19:45.271101 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22\": container with ID starting with 714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22 not found: ID does not exist" containerID="714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.271139 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22"} err="failed to get container status \"714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22\": rpc error: code = NotFound desc = could not find container \"714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22\": container with ID 
starting with 714f5f2c731b486724b5a7d136c454c71b3c395f99246abc8d3e81342e915b22 not found: ID does not exist" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.271154 4678 scope.go:117] "RemoveContainer" containerID="a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c" Nov 24 11:19:45 crc kubenswrapper[4678]: E1124 11:19:45.272545 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c\": container with ID starting with a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c not found: ID does not exist" containerID="a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.272585 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c"} err="failed to get container status \"a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c\": rpc error: code = NotFound desc = could not find container \"a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c\": container with ID starting with a2058078fdba91b6b8c24a7c1842059a5d4d135c99b6294aa4183a9b8b4d616c not found: ID does not exist" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.904732 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" path="/var/lib/kubelet/pods/224e7e28-2c19-4df5-bdab-6bd57cfb93ac/volumes" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.906837 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" path="/var/lib/kubelet/pods/d55ea26a-6c29-4c66-a0db-2a9e94b21f29/volumes" Nov 24 11:19:45 crc kubenswrapper[4678]: I1124 11:19:45.954039 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-8kl7n"] Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.050320 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8kl7n" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerName="registry-server" containerID="cri-o://5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935" gracePeriod=2 Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.387331 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.479878 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spppq\" (UniqueName: \"kubernetes.io/projected/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-kube-api-access-spppq\") pod \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.479976 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-catalog-content\") pod \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.480018 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-utilities\") pod \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\" (UID: \"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68\") " Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.480926 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-utilities" (OuterVolumeSpecName: "utilities") pod "4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" (UID: 
"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.486013 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-kube-api-access-spppq" (OuterVolumeSpecName: "kube-api-access-spppq") pod "4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" (UID: "4e3f3023-4b33-49cc-96d5-ba93bb9c0e68"). InnerVolumeSpecName "kube-api-access-spppq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.503179 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" (UID: "4e3f3023-4b33-49cc-96d5-ba93bb9c0e68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.581231 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spppq\" (UniqueName: \"kubernetes.io/projected/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-kube-api-access-spppq\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.581271 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:46 crc kubenswrapper[4678]: I1124 11:19:46.581284 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.058207 4678 generic.go:334] "Generic (PLEG): container finished" 
podID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerID="5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935" exitCode=0 Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.058262 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kl7n" event={"ID":"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68","Type":"ContainerDied","Data":"5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935"} Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.058305 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kl7n" event={"ID":"4e3f3023-4b33-49cc-96d5-ba93bb9c0e68","Type":"ContainerDied","Data":"d9120dd74d46a6721bbf534a56b7e3dcca9cf01c53a742e9e1724791da01c4b0"} Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.058329 4678 scope.go:117] "RemoveContainer" containerID="5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.058387 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kl7n" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.080792 4678 scope.go:117] "RemoveContainer" containerID="162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.087436 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kl7n"] Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.090070 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kl7n"] Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.127229 4678 scope.go:117] "RemoveContainer" containerID="648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.146384 4678 scope.go:117] "RemoveContainer" containerID="5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935" Nov 24 11:19:47 crc kubenswrapper[4678]: E1124 11:19:47.147030 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935\": container with ID starting with 5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935 not found: ID does not exist" containerID="5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.147089 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935"} err="failed to get container status \"5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935\": rpc error: code = NotFound desc = could not find container \"5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935\": container with ID starting with 5616d7f2d0b663db9da038dee293fe44241234799847592495a34bbb55279935 not found: 
ID does not exist" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.147132 4678 scope.go:117] "RemoveContainer" containerID="162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764" Nov 24 11:19:47 crc kubenswrapper[4678]: E1124 11:19:47.147916 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764\": container with ID starting with 162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764 not found: ID does not exist" containerID="162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.147953 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764"} err="failed to get container status \"162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764\": rpc error: code = NotFound desc = could not find container \"162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764\": container with ID starting with 162cdfb20fce74ebc5b3eb3d1ef008a0ddb1b8f2cded83dd8c1f6888a099c764 not found: ID does not exist" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.147979 4678 scope.go:117] "RemoveContainer" containerID="648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e" Nov 24 11:19:47 crc kubenswrapper[4678]: E1124 11:19:47.148418 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e\": container with ID starting with 648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e not found: ID does not exist" containerID="648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.148482 4678 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e"} err="failed to get container status \"648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e\": rpc error: code = NotFound desc = could not find container \"648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e\": container with ID starting with 648e4d41b52134c6d07aef6d2cf182541eb0c33edf6ad8912f75c14c8533767e not found: ID does not exist" Nov 24 11:19:47 crc kubenswrapper[4678]: I1124 11:19:47.906629 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" path="/var/lib/kubelet/pods/4e3f3023-4b33-49cc-96d5-ba93bb9c0e68/volumes" Nov 24 11:19:48 crc kubenswrapper[4678]: I1124 11:19:48.351028 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bf6l7"] Nov 24 11:19:48 crc kubenswrapper[4678]: I1124 11:19:48.351289 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bf6l7" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerName="registry-server" containerID="cri-o://5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1" gracePeriod=2 Nov 24 11:19:48 crc kubenswrapper[4678]: I1124 11:19:48.794319 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:48 crc kubenswrapper[4678]: I1124 11:19:48.911552 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-utilities\") pod \"cde0a2ac-63b7-4301-9933-34fe08f499a9\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " Nov 24 11:19:48 crc kubenswrapper[4678]: I1124 11:19:48.911643 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-catalog-content\") pod \"cde0a2ac-63b7-4301-9933-34fe08f499a9\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " Nov 24 11:19:48 crc kubenswrapper[4678]: I1124 11:19:48.911778 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtpht\" (UniqueName: \"kubernetes.io/projected/cde0a2ac-63b7-4301-9933-34fe08f499a9-kube-api-access-qtpht\") pod \"cde0a2ac-63b7-4301-9933-34fe08f499a9\" (UID: \"cde0a2ac-63b7-4301-9933-34fe08f499a9\") " Nov 24 11:19:48 crc kubenswrapper[4678]: I1124 11:19:48.912984 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-utilities" (OuterVolumeSpecName: "utilities") pod "cde0a2ac-63b7-4301-9933-34fe08f499a9" (UID: "cde0a2ac-63b7-4301-9933-34fe08f499a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:19:48 crc kubenswrapper[4678]: I1124 11:19:48.919455 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde0a2ac-63b7-4301-9933-34fe08f499a9-kube-api-access-qtpht" (OuterVolumeSpecName: "kube-api-access-qtpht") pod "cde0a2ac-63b7-4301-9933-34fe08f499a9" (UID: "cde0a2ac-63b7-4301-9933-34fe08f499a9"). InnerVolumeSpecName "kube-api-access-qtpht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.013158 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtpht\" (UniqueName: \"kubernetes.io/projected/cde0a2ac-63b7-4301-9933-34fe08f499a9-kube-api-access-qtpht\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.013194 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.017166 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cde0a2ac-63b7-4301-9933-34fe08f499a9" (UID: "cde0a2ac-63b7-4301-9933-34fe08f499a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.072912 4678 generic.go:334] "Generic (PLEG): container finished" podID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerID="5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1" exitCode=0 Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.072958 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bf6l7" event={"ID":"cde0a2ac-63b7-4301-9933-34fe08f499a9","Type":"ContainerDied","Data":"5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1"} Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.072989 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bf6l7" event={"ID":"cde0a2ac-63b7-4301-9933-34fe08f499a9","Type":"ContainerDied","Data":"2be2f3dd3896fe986566060842aad6c5fa80fb3220e2ec0410043d973e7a5823"} Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.073012 
4678 scope.go:117] "RemoveContainer" containerID="5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.073049 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bf6l7" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.092828 4678 scope.go:117] "RemoveContainer" containerID="8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.110606 4678 scope.go:117] "RemoveContainer" containerID="c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.114181 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0a2ac-63b7-4301-9933-34fe08f499a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.125148 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bf6l7"] Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.131537 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bf6l7"] Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.141340 4678 scope.go:117] "RemoveContainer" containerID="5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1" Nov 24 11:19:49 crc kubenswrapper[4678]: E1124 11:19:49.141940 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1\": container with ID starting with 5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1 not found: ID does not exist" containerID="5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.141999 4678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1"} err="failed to get container status \"5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1\": rpc error: code = NotFound desc = could not find container \"5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1\": container with ID starting with 5f346fdae84255eb79823669b9ff2d9e6b08ca4136da9dfe1425ea31067130a1 not found: ID does not exist" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.142033 4678 scope.go:117] "RemoveContainer" containerID="8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91" Nov 24 11:19:49 crc kubenswrapper[4678]: E1124 11:19:49.142451 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91\": container with ID starting with 8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91 not found: ID does not exist" containerID="8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.142514 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91"} err="failed to get container status \"8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91\": rpc error: code = NotFound desc = could not find container \"8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91\": container with ID starting with 8024413c96d53b254da3eb7189e88646e7ff41950d829a038389acdb04a77d91 not found: ID does not exist" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.142568 4678 scope.go:117] "RemoveContainer" containerID="c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb" Nov 24 11:19:49 crc kubenswrapper[4678]: E1124 
11:19:49.143181 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb\": container with ID starting with c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb not found: ID does not exist" containerID="c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.143208 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb"} err="failed to get container status \"c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb\": rpc error: code = NotFound desc = could not find container \"c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb\": container with ID starting with c93ce16da9aedc72635bb490caf0b77557d42119bb65dc995e2b4d298af3b9bb not found: ID does not exist" Nov 24 11:19:49 crc kubenswrapper[4678]: I1124 11:19:49.905082 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" path="/var/lib/kubelet/pods/cde0a2ac-63b7-4301-9933-34fe08f499a9/volumes" Nov 24 11:19:55 crc kubenswrapper[4678]: I1124 11:19:55.969752 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9sfxt"] Nov 24 11:19:55 crc kubenswrapper[4678]: I1124 11:19:55.970477 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" podUID="dd1948d5-d633-4a92-a800-776add7a0894" containerName="controller-manager" containerID="cri-o://518bbfc59ceb7601c55c1078931afc8f91780d6822b520315ae5f34489a9c673" gracePeriod=30 Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.076711 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"] Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.077028 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" podUID="b1550d14-7d6b-43b9-bbbd-268b0274028a" containerName="route-controller-manager" containerID="cri-o://19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992" gracePeriod=30 Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.137734 4678 generic.go:334] "Generic (PLEG): container finished" podID="dd1948d5-d633-4a92-a800-776add7a0894" containerID="518bbfc59ceb7601c55c1078931afc8f91780d6822b520315ae5f34489a9c673" exitCode=0 Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.137786 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" event={"ID":"dd1948d5-d633-4a92-a800-776add7a0894","Type":"ContainerDied","Data":"518bbfc59ceb7601c55c1078931afc8f91780d6822b520315ae5f34489a9c673"} Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.356578 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.420440 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1948d5-d633-4a92-a800-776add7a0894-serving-cert\") pod \"dd1948d5-d633-4a92-a800-776add7a0894\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.421454 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-proxy-ca-bundles\") pod \"dd1948d5-d633-4a92-a800-776add7a0894\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.421618 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-config\") pod \"dd1948d5-d633-4a92-a800-776add7a0894\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.421667 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsxtw\" (UniqueName: \"kubernetes.io/projected/dd1948d5-d633-4a92-a800-776add7a0894-kube-api-access-fsxtw\") pod \"dd1948d5-d633-4a92-a800-776add7a0894\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.421727 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-client-ca\") pod \"dd1948d5-d633-4a92-a800-776add7a0894\" (UID: \"dd1948d5-d633-4a92-a800-776add7a0894\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.422359 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd1948d5-d633-4a92-a800-776add7a0894" (UID: "dd1948d5-d633-4a92-a800-776add7a0894"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.422401 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-config" (OuterVolumeSpecName: "config") pod "dd1948d5-d633-4a92-a800-776add7a0894" (UID: "dd1948d5-d633-4a92-a800-776add7a0894"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.422567 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dd1948d5-d633-4a92-a800-776add7a0894" (UID: "dd1948d5-d633-4a92-a800-776add7a0894"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.427381 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.427411 4678 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.427425 4678 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1948d5-d633-4a92-a800-776add7a0894-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.428449 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1948d5-d633-4a92-a800-776add7a0894-kube-api-access-fsxtw" (OuterVolumeSpecName: "kube-api-access-fsxtw") pod "dd1948d5-d633-4a92-a800-776add7a0894" (UID: "dd1948d5-d633-4a92-a800-776add7a0894"). InnerVolumeSpecName "kube-api-access-fsxtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.429070 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1948d5-d633-4a92-a800-776add7a0894-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd1948d5-d633-4a92-a800-776add7a0894" (UID: "dd1948d5-d633-4a92-a800-776add7a0894"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.431817 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.528327 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bdkj\" (UniqueName: \"kubernetes.io/projected/b1550d14-7d6b-43b9-bbbd-268b0274028a-kube-api-access-4bdkj\") pod \"b1550d14-7d6b-43b9-bbbd-268b0274028a\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.528481 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-config\") pod \"b1550d14-7d6b-43b9-bbbd-268b0274028a\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.528530 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1550d14-7d6b-43b9-bbbd-268b0274028a-serving-cert\") pod \"b1550d14-7d6b-43b9-bbbd-268b0274028a\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.528670 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-client-ca\") pod \"b1550d14-7d6b-43b9-bbbd-268b0274028a\" (UID: \"b1550d14-7d6b-43b9-bbbd-268b0274028a\") " Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.528999 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsxtw\" (UniqueName: \"kubernetes.io/projected/dd1948d5-d633-4a92-a800-776add7a0894-kube-api-access-fsxtw\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.529022 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dd1948d5-d633-4a92-a800-776add7a0894-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.529368 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-client-ca" (OuterVolumeSpecName: "client-ca") pod "b1550d14-7d6b-43b9-bbbd-268b0274028a" (UID: "b1550d14-7d6b-43b9-bbbd-268b0274028a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.529545 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-config" (OuterVolumeSpecName: "config") pod "b1550d14-7d6b-43b9-bbbd-268b0274028a" (UID: "b1550d14-7d6b-43b9-bbbd-268b0274028a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.532442 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1550d14-7d6b-43b9-bbbd-268b0274028a-kube-api-access-4bdkj" (OuterVolumeSpecName: "kube-api-access-4bdkj") pod "b1550d14-7d6b-43b9-bbbd-268b0274028a" (UID: "b1550d14-7d6b-43b9-bbbd-268b0274028a"). InnerVolumeSpecName "kube-api-access-4bdkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.533121 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1550d14-7d6b-43b9-bbbd-268b0274028a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b1550d14-7d6b-43b9-bbbd-268b0274028a" (UID: "b1550d14-7d6b-43b9-bbbd-268b0274028a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.631272 4678 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.631320 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bdkj\" (UniqueName: \"kubernetes.io/projected/b1550d14-7d6b-43b9-bbbd-268b0274028a-kube-api-access-4bdkj\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.631333 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1550d14-7d6b-43b9-bbbd-268b0274028a-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:56 crc kubenswrapper[4678]: I1124 11:19:56.631344 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1550d14-7d6b-43b9-bbbd-268b0274028a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.145661 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.146435 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9sfxt" event={"ID":"dd1948d5-d633-4a92-a800-776add7a0894","Type":"ContainerDied","Data":"18fb14b53b252f397dca48ded7ef0cb718bc5236b24f5a9dde7f4602e8f5f6dd"} Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.146608 4678 scope.go:117] "RemoveContainer" containerID="518bbfc59ceb7601c55c1078931afc8f91780d6822b520315ae5f34489a9c673" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.147316 4678 generic.go:334] "Generic (PLEG): container finished" podID="b1550d14-7d6b-43b9-bbbd-268b0274028a" containerID="19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992" exitCode=0 Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.147360 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.147359 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" event={"ID":"b1550d14-7d6b-43b9-bbbd-268b0274028a","Type":"ContainerDied","Data":"19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992"} Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.147418 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h" event={"ID":"b1550d14-7d6b-43b9-bbbd-268b0274028a","Type":"ContainerDied","Data":"4d7c3e527d8069963bb9362d79abad9e3013d4fcb8c5de75c228d944d12c794e"} Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.163793 4678 scope.go:117] "RemoveContainer" containerID="19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992" Nov 24 11:19:57 crc 
kubenswrapper[4678]: I1124 11:19:57.180833 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9sfxt"] Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.184698 4678 scope.go:117] "RemoveContainer" containerID="19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.185277 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992\": container with ID starting with 19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992 not found: ID does not exist" containerID="19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.185345 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992"} err="failed to get container status \"19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992\": rpc error: code = NotFound desc = could not find container \"19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992\": container with ID starting with 19403cd2fe755a390f8dc144980b0e1b3d5ff8d2ed6ea2ed4f32f56f58716992 not found: ID does not exist" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.186223 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9sfxt"] Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.202917 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"] Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.205228 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b4d2h"] 
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605098 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8555b94568-qzzrp"] Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605479 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerName="registry-server" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605496 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerName="registry-server" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605508 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerName="extract-content" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605515 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerName="extract-content" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605524 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1550d14-7d6b-43b9-bbbd-268b0274028a" containerName="route-controller-manager" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605531 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1550d14-7d6b-43b9-bbbd-268b0274028a" containerName="route-controller-manager" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605538 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerName="extract-utilities" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605545 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerName="extract-utilities" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605553 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" 
containerName="extract-content" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605559 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerName="extract-content" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605567 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerName="extract-content" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605573 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerName="extract-content" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605582 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerName="registry-server" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605590 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerName="registry-server" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605597 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1948d5-d633-4a92-a800-776add7a0894" containerName="controller-manager" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605603 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1948d5-d633-4a92-a800-776add7a0894" containerName="controller-manager" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605611 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerName="registry-server" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605616 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerName="registry-server" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605627 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" 
containerName="registry-server" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605632 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerName="registry-server" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605642 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerName="extract-utilities" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605647 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerName="extract-utilities" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605656 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c764a93d-9afc-48b6-aabc-5f46d7ee745d" containerName="pruner" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605662 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c764a93d-9afc-48b6-aabc-5f46d7ee745d" containerName="pruner" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605695 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerName="extract-content" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605700 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerName="extract-content" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605707 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerName="extract-utilities" Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605713 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerName="extract-utilities" Nov 24 11:19:57 crc kubenswrapper[4678]: E1124 11:19:57.605720 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" 
containerName="extract-utilities"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605726 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerName="extract-utilities"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605827 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c764a93d-9afc-48b6-aabc-5f46d7ee745d" containerName="pruner"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605840 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1948d5-d633-4a92-a800-776add7a0894" containerName="controller-manager"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605849 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="224e7e28-2c19-4df5-bdab-6bd57cfb93ac" containerName="registry-server"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605856 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3f3023-4b33-49cc-96d5-ba93bb9c0e68" containerName="registry-server"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605864 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde0a2ac-63b7-4301-9933-34fe08f499a9" containerName="registry-server"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605874 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1550d14-7d6b-43b9-bbbd-268b0274028a" containerName="route-controller-manager"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.605881 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55ea26a-6c29-4c66-a0db-2a9e94b21f29" containerName="registry-server"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.606451 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.609933 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"]
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.610906 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.611045 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.611944 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.611953 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.612241 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.613874 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.614388 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.615189 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.615220 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.616324 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.616375 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.616716 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.616868 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.631062 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.638405 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8555b94568-qzzrp"]
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.642853 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9359744b-1f23-4dc4-ab3d-485214d347e5-serving-cert\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.642907 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-client-ca\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.642942 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-client-ca\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.642986 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-proxy-ca-bundles\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.643196 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbd7\" (UniqueName: \"kubernetes.io/projected/27e783bc-5cb0-428a-a977-c1eb7b833a26-kube-api-access-vqbd7\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.643348 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e783bc-5cb0-428a-a977-c1eb7b833a26-serving-cert\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.643468 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-config\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.643597 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-458vv\" (UniqueName: \"kubernetes.io/projected/9359744b-1f23-4dc4-ab3d-485214d347e5-kube-api-access-458vv\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.643639 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-config\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.662568 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"]
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744344 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-config\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744420 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-458vv\" (UniqueName: \"kubernetes.io/projected/9359744b-1f23-4dc4-ab3d-485214d347e5-kube-api-access-458vv\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744442 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-config\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744473 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9359744b-1f23-4dc4-ab3d-485214d347e5-serving-cert\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744490 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-client-ca\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744515 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-client-ca\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744549 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-proxy-ca-bundles\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744569 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqbd7\" (UniqueName: \"kubernetes.io/projected/27e783bc-5cb0-428a-a977-c1eb7b833a26-kube-api-access-vqbd7\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.744589 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e783bc-5cb0-428a-a977-c1eb7b833a26-serving-cert\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.746032 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-client-ca\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.746285 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-config\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.746660 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-client-ca\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.747624 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-proxy-ca-bundles\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.747765 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-config\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.751564 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e783bc-5cb0-428a-a977-c1eb7b833a26-serving-cert\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.755179 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9359744b-1f23-4dc4-ab3d-485214d347e5-serving-cert\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.763418 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqbd7\" (UniqueName: \"kubernetes.io/projected/27e783bc-5cb0-428a-a977-c1eb7b833a26-kube-api-access-vqbd7\") pod \"route-controller-manager-8697c66b67-ptbcc\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.767404 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-458vv\" (UniqueName: \"kubernetes.io/projected/9359744b-1f23-4dc4-ab3d-485214d347e5-kube-api-access-458vv\") pod \"controller-manager-8555b94568-qzzrp\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.903354 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1550d14-7d6b-43b9-bbbd-268b0274028a" path="/var/lib/kubelet/pods/b1550d14-7d6b-43b9-bbbd-268b0274028a/volumes"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.904785 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1948d5-d633-4a92-a800-776add7a0894" path="/var/lib/kubelet/pods/dd1948d5-d633-4a92-a800-776add7a0894/volumes"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.929363 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:57 crc kubenswrapper[4678]: I1124 11:19:57.943296 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:58 crc kubenswrapper[4678]: I1124 11:19:58.179080 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8555b94568-qzzrp"]
Nov 24 11:19:58 crc kubenswrapper[4678]: I1124 11:19:58.211160 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"]
Nov 24 11:19:58 crc kubenswrapper[4678]: W1124 11:19:58.219110 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27e783bc_5cb0_428a_a977_c1eb7b833a26.slice/crio-b2f80aedefc4616062e40c7fcbf88c3bc0ed41c72f78f93d446d2ebbc9b1285f WatchSource:0}: Error finding container b2f80aedefc4616062e40c7fcbf88c3bc0ed41c72f78f93d446d2ebbc9b1285f: Status 404 returned error can't find the container with id b2f80aedefc4616062e40c7fcbf88c3bc0ed41c72f78f93d446d2ebbc9b1285f
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.168800 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc" event={"ID":"27e783bc-5cb0-428a-a977-c1eb7b833a26","Type":"ContainerStarted","Data":"5aafa4dee5e6ae9bcc848b3dda23717ab201562d3ba3e67178a19cda9380827c"}
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.174126 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc" event={"ID":"27e783bc-5cb0-428a-a977-c1eb7b833a26","Type":"ContainerStarted","Data":"b2f80aedefc4616062e40c7fcbf88c3bc0ed41c72f78f93d446d2ebbc9b1285f"}
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.174240 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.175909 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp" event={"ID":"9359744b-1f23-4dc4-ab3d-485214d347e5","Type":"ContainerStarted","Data":"2c56207ac2612e70dc6749ac3fe612b62646c5c9da88dd694420177c93eca659"}
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.175942 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp" event={"ID":"9359744b-1f23-4dc4-ab3d-485214d347e5","Type":"ContainerStarted","Data":"6924c51ce5d5443e145e8dd10a0e34043c5ccd9ecefeeb3fa93f4ab1fe3a75af"}
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.176929 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.178153 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.182369 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp"
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.211512 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc" podStartSLOduration=3.211492243 podStartE2EDuration="3.211492243s" podCreationTimestamp="2025-11-24 11:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:19:59.191640674 +0000 UTC m=+210.122700303" watchObservedRunningTime="2025-11-24 11:19:59.211492243 +0000 UTC m=+210.142551882"
Nov 24 11:19:59 crc kubenswrapper[4678]: I1124 11:19:59.234872 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp" podStartSLOduration=4.234847546 podStartE2EDuration="4.234847546s" podCreationTimestamp="2025-11-24 11:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:19:59.232863427 +0000 UTC m=+210.163923066" watchObservedRunningTime="2025-11-24 11:19:59.234847546 +0000 UTC m=+210.165907185"
Nov 24 11:20:00 crc kubenswrapper[4678]: I1124 11:20:00.296357 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 11:20:00 crc kubenswrapper[4678]: I1124 11:20:00.296443 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 11:20:00 crc kubenswrapper[4678]: I1124 11:20:00.296508 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6"
Nov 24 11:20:00 crc kubenswrapper[4678]: I1124 11:20:00.297242 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 11:20:00 crc kubenswrapper[4678]: I1124 11:20:00.297336 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6" gracePeriod=600
Nov 24 11:20:01 crc kubenswrapper[4678]: I1124 11:20:01.192037 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6" exitCode=0
Nov 24 11:20:01 crc kubenswrapper[4678]: I1124 11:20:01.192140 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6"}
Nov 24 11:20:01 crc kubenswrapper[4678]: I1124 11:20:01.192631 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"71975c2ba1a669dde4cf0c96567433189448d817b027616751a53013ba5e4709"}
Nov 24 11:20:05 crc kubenswrapper[4678]: I1124 11:20:05.550113 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" podUID="019dfbed-3859-4761-890e-cd8205747454" containerName="oauth-openshift" containerID="cri-o://430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686" gracePeriod=15
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.051906 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj"
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097598 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-login\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097654 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-cliconfig\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097712 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/019dfbed-3859-4761-890e-cd8205747454-audit-dir\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097740 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-session\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097817 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-955rz\" (UniqueName: \"kubernetes.io/projected/019dfbed-3859-4761-890e-cd8205747454-kube-api-access-955rz\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097845 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-serving-cert\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097872 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-router-certs\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097901 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-audit-policies\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097921 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-idp-0-file-data\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097944 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-service-ca\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.097977 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-trusted-ca-bundle\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.098014 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-error\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.098048 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-ocp-branding-template\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.098070 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-provider-selection\") pod \"019dfbed-3859-4761-890e-cd8205747454\" (UID: \"019dfbed-3859-4761-890e-cd8205747454\") "
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.099483 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/019dfbed-3859-4761-890e-cd8205747454-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.100042 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.100386 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.101053 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.101273 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.106391 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.106544 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019dfbed-3859-4761-890e-cd8205747454-kube-api-access-955rz" (OuterVolumeSpecName: "kube-api-access-955rz") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "kube-api-access-955rz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.107042 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.108013 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.108458 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.108746 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.113182 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.113326 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.113591 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "019dfbed-3859-4761-890e-cd8205747454" (UID: "019dfbed-3859-4761-890e-cd8205747454"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199268 4678 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/019dfbed-3859-4761-890e-cd8205747454-audit-dir\") on node \"crc\" DevicePath \"\""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199325 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199343 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-955rz\" (UniqueName: \"kubernetes.io/projected/019dfbed-3859-4761-890e-cd8205747454-kube-api-access-955rz\") on node \"crc\" DevicePath \"\""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199354 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199370 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-router-certs\") on node \"crc\" DevicePath 
\"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199383 4678 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199399 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199413 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199425 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199437 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199449 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199461 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199476 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.199488 4678 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/019dfbed-3859-4761-890e-cd8205747454-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.232845 4678 generic.go:334] "Generic (PLEG): container finished" podID="019dfbed-3859-4761-890e-cd8205747454" containerID="430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686" exitCode=0 Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.232935 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" event={"ID":"019dfbed-3859-4761-890e-cd8205747454","Type":"ContainerDied","Data":"430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686"} Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.232974 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" event={"ID":"019dfbed-3859-4761-890e-cd8205747454","Type":"ContainerDied","Data":"0b473e10bcc98d7a8a8ada1a91fd204b7e763e0afeb35bb0d03adea7a1e9ec61"} Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.232997 4678 scope.go:117] "RemoveContainer" containerID="430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.232937 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tf9mj" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.257741 4678 scope.go:117] "RemoveContainer" containerID="430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686" Nov 24 11:20:06 crc kubenswrapper[4678]: E1124 11:20:06.258247 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686\": container with ID starting with 430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686 not found: ID does not exist" containerID="430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.258277 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686"} err="failed to get container status \"430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686\": rpc error: code = NotFound desc = could not find container \"430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686\": container with ID starting with 430ea12bc18953e8cb5d4557604c66d96ad46ed46ddaea527f1d9791cbd09686 not found: ID does not exist" Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.274444 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tf9mj"] Nov 24 11:20:06 crc kubenswrapper[4678]: I1124 11:20:06.282359 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tf9mj"] Nov 24 11:20:07 crc kubenswrapper[4678]: I1124 11:20:07.905981 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="019dfbed-3859-4761-890e-cd8205747454" path="/var/lib/kubelet/pods/019dfbed-3859-4761-890e-cd8205747454/volumes" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 
11:20:08.619012 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-bcd64c88d-bcmqt"] Nov 24 11:20:08 crc kubenswrapper[4678]: E1124 11:20:08.619422 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019dfbed-3859-4761-890e-cd8205747454" containerName="oauth-openshift" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.619471 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="019dfbed-3859-4761-890e-cd8205747454" containerName="oauth-openshift" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.619717 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="019dfbed-3859-4761-890e-cd8205747454" containerName="oauth-openshift" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.620499 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.623369 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.629455 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.629488 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.629626 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.629841 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.630093 4678 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.630210 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.630260 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.630220 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.630508 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.631321 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.632595 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.647790 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bcd64c88d-bcmqt"] Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.648164 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.648900 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.658456 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 11:20:08 crc 
kubenswrapper[4678]: I1124 11:20:08.735535 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-login\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.735587 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.735621 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-audit-policies\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.735641 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.735689 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-session\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.735710 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-router-certs\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.735840 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.735955 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.736028 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.736080 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfxlp\" (UniqueName: \"kubernetes.io/projected/0b0e58b0-57be-433a-abb0-a2aaced99beb-kube-api-access-sfxlp\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.736146 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-error\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.736190 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b0e58b0-57be-433a-abb0-a2aaced99beb-audit-dir\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.736251 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.736316 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.837923 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-error\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838005 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b0e58b0-57be-433a-abb0-a2aaced99beb-audit-dir\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838074 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838129 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bcd64c88d-bcmqt\" 
(UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838150 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-login\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838177 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838203 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-audit-policies\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838221 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838243 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-session\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838264 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-router-certs\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838288 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838323 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838353 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838380 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfxlp\" (UniqueName: \"kubernetes.io/projected/0b0e58b0-57be-433a-abb0-a2aaced99beb-kube-api-access-sfxlp\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.838839 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b0e58b0-57be-433a-abb0-a2aaced99beb-audit-dir\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.840100 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.840534 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-service-ca\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.841098 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.841340 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b0e58b0-57be-433a-abb0-a2aaced99beb-audit-policies\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.844606 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.844695 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-error\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.844610 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " 
pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.850111 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.850729 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-user-template-login\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.852473 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.852503 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-session\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.854419 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/0b0e58b0-57be-433a-abb0-a2aaced99beb-v4-0-config-system-router-certs\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.872355 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfxlp\" (UniqueName: \"kubernetes.io/projected/0b0e58b0-57be-433a-abb0-a2aaced99beb-kube-api-access-sfxlp\") pod \"oauth-openshift-bcd64c88d-bcmqt\" (UID: \"0b0e58b0-57be-433a-abb0-a2aaced99beb\") " pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:08 crc kubenswrapper[4678]: I1124 11:20:08.948548 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:09 crc kubenswrapper[4678]: I1124 11:20:09.444904 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bcd64c88d-bcmqt"] Nov 24 11:20:10 crc kubenswrapper[4678]: I1124 11:20:10.265016 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" event={"ID":"0b0e58b0-57be-433a-abb0-a2aaced99beb","Type":"ContainerStarted","Data":"9cbf0008e204f541de74e46f8408683e9b1a4c71184c7de79b1a7248897dc41d"} Nov 24 11:20:10 crc kubenswrapper[4678]: I1124 11:20:10.266072 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:10 crc kubenswrapper[4678]: I1124 11:20:10.266091 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" event={"ID":"0b0e58b0-57be-433a-abb0-a2aaced99beb","Type":"ContainerStarted","Data":"bacdec8ed0b3ef373664142dcce8f33d786cbdf060c68022bff2365bf1e487d9"} Nov 24 11:20:10 crc kubenswrapper[4678]: I1124 11:20:10.274100 4678 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" Nov 24 11:20:10 crc kubenswrapper[4678]: I1124 11:20:10.290567 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-bcd64c88d-bcmqt" podStartSLOduration=30.290544259 podStartE2EDuration="30.290544259s" podCreationTimestamp="2025-11-24 11:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:20:10.287285663 +0000 UTC m=+221.218345322" watchObservedRunningTime="2025-11-24 11:20:10.290544259 +0000 UTC m=+221.221603908" Nov 24 11:20:15 crc kubenswrapper[4678]: I1124 11:20:15.988855 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8555b94568-qzzrp"] Nov 24 11:20:15 crc kubenswrapper[4678]: I1124 11:20:15.992741 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp" podUID="9359744b-1f23-4dc4-ab3d-485214d347e5" containerName="controller-manager" containerID="cri-o://2c56207ac2612e70dc6749ac3fe612b62646c5c9da88dd694420177c93eca659" gracePeriod=30 Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.016059 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"] Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.016877 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc" podUID="27e783bc-5cb0-428a-a977-c1eb7b833a26" containerName="route-controller-manager" containerID="cri-o://5aafa4dee5e6ae9bcc848b3dda23717ab201562d3ba3e67178a19cda9380827c" gracePeriod=30 Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.308780 4678 generic.go:334] "Generic (PLEG): 
container finished" podID="9359744b-1f23-4dc4-ab3d-485214d347e5" containerID="2c56207ac2612e70dc6749ac3fe612b62646c5c9da88dd694420177c93eca659" exitCode=0 Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.309127 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp" event={"ID":"9359744b-1f23-4dc4-ab3d-485214d347e5","Type":"ContainerDied","Data":"2c56207ac2612e70dc6749ac3fe612b62646c5c9da88dd694420177c93eca659"} Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.310954 4678 generic.go:334] "Generic (PLEG): container finished" podID="27e783bc-5cb0-428a-a977-c1eb7b833a26" containerID="5aafa4dee5e6ae9bcc848b3dda23717ab201562d3ba3e67178a19cda9380827c" exitCode=0 Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.311018 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc" event={"ID":"27e783bc-5cb0-428a-a977-c1eb7b833a26","Type":"ContainerDied","Data":"5aafa4dee5e6ae9bcc848b3dda23717ab201562d3ba3e67178a19cda9380827c"} Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.576901 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.661039 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqbd7\" (UniqueName: \"kubernetes.io/projected/27e783bc-5cb0-428a-a977-c1eb7b833a26-kube-api-access-vqbd7\") pod \"27e783bc-5cb0-428a-a977-c1eb7b833a26\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.661122 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e783bc-5cb0-428a-a977-c1eb7b833a26-serving-cert\") pod \"27e783bc-5cb0-428a-a977-c1eb7b833a26\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.661273 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-config\") pod \"27e783bc-5cb0-428a-a977-c1eb7b833a26\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.661373 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-client-ca\") pod \"27e783bc-5cb0-428a-a977-c1eb7b833a26\" (UID: \"27e783bc-5cb0-428a-a977-c1eb7b833a26\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.662456 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-client-ca" (OuterVolumeSpecName: "client-ca") pod "27e783bc-5cb0-428a-a977-c1eb7b833a26" (UID: "27e783bc-5cb0-428a-a977-c1eb7b833a26"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.663091 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-config" (OuterVolumeSpecName: "config") pod "27e783bc-5cb0-428a-a977-c1eb7b833a26" (UID: "27e783bc-5cb0-428a-a977-c1eb7b833a26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.668457 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e783bc-5cb0-428a-a977-c1eb7b833a26-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "27e783bc-5cb0-428a-a977-c1eb7b833a26" (UID: "27e783bc-5cb0-428a-a977-c1eb7b833a26"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.668653 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e783bc-5cb0-428a-a977-c1eb7b833a26-kube-api-access-vqbd7" (OuterVolumeSpecName: "kube-api-access-vqbd7") pod "27e783bc-5cb0-428a-a977-c1eb7b833a26" (UID: "27e783bc-5cb0-428a-a977-c1eb7b833a26"). InnerVolumeSpecName "kube-api-access-vqbd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.674491 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763232 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-config\") pod \"9359744b-1f23-4dc4-ab3d-485214d347e5\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763299 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-proxy-ca-bundles\") pod \"9359744b-1f23-4dc4-ab3d-485214d347e5\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763341 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-458vv\" (UniqueName: \"kubernetes.io/projected/9359744b-1f23-4dc4-ab3d-485214d347e5-kube-api-access-458vv\") pod \"9359744b-1f23-4dc4-ab3d-485214d347e5\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763402 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9359744b-1f23-4dc4-ab3d-485214d347e5-serving-cert\") pod \"9359744b-1f23-4dc4-ab3d-485214d347e5\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763447 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-client-ca\") pod \"9359744b-1f23-4dc4-ab3d-485214d347e5\" (UID: \"9359744b-1f23-4dc4-ab3d-485214d347e5\") " Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763878 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqbd7\" 
(UniqueName: \"kubernetes.io/projected/27e783bc-5cb0-428a-a977-c1eb7b833a26-kube-api-access-vqbd7\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763898 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e783bc-5cb0-428a-a977-c1eb7b833a26-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763908 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.763917 4678 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27e783bc-5cb0-428a-a977-c1eb7b833a26-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.765081 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-config" (OuterVolumeSpecName: "config") pod "9359744b-1f23-4dc4-ab3d-485214d347e5" (UID: "9359744b-1f23-4dc4-ab3d-485214d347e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.768450 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9359744b-1f23-4dc4-ab3d-485214d347e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9359744b-1f23-4dc4-ab3d-485214d347e5" (UID: "9359744b-1f23-4dc4-ab3d-485214d347e5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.778078 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "9359744b-1f23-4dc4-ab3d-485214d347e5" (UID: "9359744b-1f23-4dc4-ab3d-485214d347e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.780027 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9359744b-1f23-4dc4-ab3d-485214d347e5" (UID: "9359744b-1f23-4dc4-ab3d-485214d347e5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.780436 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9359744b-1f23-4dc4-ab3d-485214d347e5-kube-api-access-458vv" (OuterVolumeSpecName: "kube-api-access-458vv") pod "9359744b-1f23-4dc4-ab3d-485214d347e5" (UID: "9359744b-1f23-4dc4-ab3d-485214d347e5"). InnerVolumeSpecName "kube-api-access-458vv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.865564 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.865615 4678 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.865634 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-458vv\" (UniqueName: \"kubernetes.io/projected/9359744b-1f23-4dc4-ab3d-485214d347e5-kube-api-access-458vv\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.865649 4678 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9359744b-1f23-4dc4-ab3d-485214d347e5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:16 crc kubenswrapper[4678]: I1124 11:20:16.865662 4678 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9359744b-1f23-4dc4-ab3d-485214d347e5-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.319776 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc" event={"ID":"27e783bc-5cb0-428a-a977-c1eb7b833a26","Type":"ContainerDied","Data":"b2f80aedefc4616062e40c7fcbf88c3bc0ed41c72f78f93d446d2ebbc9b1285f"} Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.319841 4678 scope.go:117] "RemoveContainer" containerID="5aafa4dee5e6ae9bcc848b3dda23717ab201562d3ba3e67178a19cda9380827c" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.319953 4678 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.323113 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp" event={"ID":"9359744b-1f23-4dc4-ab3d-485214d347e5","Type":"ContainerDied","Data":"6924c51ce5d5443e145e8dd10a0e34043c5ccd9ecefeeb3fa93f4ab1fe3a75af"} Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.323239 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8555b94568-qzzrp" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.341455 4678 scope.go:117] "RemoveContainer" containerID="2c56207ac2612e70dc6749ac3fe612b62646c5c9da88dd694420177c93eca659" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.369953 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"] Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.373312 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697c66b67-ptbcc"] Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.379653 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8555b94568-qzzrp"] Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.382614 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8555b94568-qzzrp"] Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.629033 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482"] Nov 24 11:20:17 crc kubenswrapper[4678]: E1124 11:20:17.629454 4678 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="27e783bc-5cb0-428a-a977-c1eb7b833a26" containerName="route-controller-manager" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.629498 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e783bc-5cb0-428a-a977-c1eb7b833a26" containerName="route-controller-manager" Nov 24 11:20:17 crc kubenswrapper[4678]: E1124 11:20:17.629529 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9359744b-1f23-4dc4-ab3d-485214d347e5" containerName="controller-manager" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.629539 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="9359744b-1f23-4dc4-ab3d-485214d347e5" containerName="controller-manager" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.629705 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="9359744b-1f23-4dc4-ab3d-485214d347e5" containerName="controller-manager" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.629728 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e783bc-5cb0-428a-a977-c1eb7b833a26" containerName="route-controller-manager" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.630456 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.632848 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.633026 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.633035 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.633040 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.633028 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.633433 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2"] Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.636226 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.638745 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.641137 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482"] Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.643498 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.643771 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.643817 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.644279 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.644396 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.644536 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.665701 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.667511 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2"] Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.678431 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsqh\" (UniqueName: 
\"kubernetes.io/projected/ded4510e-5378-4756-b1f4-c8b8fb801003-kube-api-access-mpsqh\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.678532 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded4510e-5378-4756-b1f4-c8b8fb801003-config\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.678702 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ded4510e-5378-4756-b1f4-c8b8fb801003-client-ca\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.678806 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded4510e-5378-4756-b1f4-c8b8fb801003-serving-cert\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780311 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded4510e-5378-4756-b1f4-c8b8fb801003-serving-cert\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " 
pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780415 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-config\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780461 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-client-ca\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780487 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpsqh\" (UniqueName: \"kubernetes.io/projected/ded4510e-5378-4756-b1f4-c8b8fb801003-kube-api-access-mpsqh\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780517 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8d9d\" (UniqueName: \"kubernetes.io/projected/0c9b633c-3813-4953-ac35-15257be56af7-kube-api-access-b8d9d\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780561 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ded4510e-5378-4756-b1f4-c8b8fb801003-config\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780633 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-proxy-ca-bundles\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780662 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ded4510e-5378-4756-b1f4-c8b8fb801003-client-ca\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.780719 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c9b633c-3813-4953-ac35-15257be56af7-serving-cert\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.782149 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ded4510e-5378-4756-b1f4-c8b8fb801003-client-ca\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 
24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.782427 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded4510e-5378-4756-b1f4-c8b8fb801003-config\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.789496 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded4510e-5378-4756-b1f4-c8b8fb801003-serving-cert\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.802414 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpsqh\" (UniqueName: \"kubernetes.io/projected/ded4510e-5378-4756-b1f4-c8b8fb801003-kube-api-access-mpsqh\") pod \"route-controller-manager-7487f488dc-2w482\" (UID: \"ded4510e-5378-4756-b1f4-c8b8fb801003\") " pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.881842 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-client-ca\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.881909 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8d9d\" (UniqueName: \"kubernetes.io/projected/0c9b633c-3813-4953-ac35-15257be56af7-kube-api-access-b8d9d\") pod 
\"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.881960 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-proxy-ca-bundles\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.881980 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c9b633c-3813-4953-ac35-15257be56af7-serving-cert\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.882032 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-config\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.883275 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-client-ca\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.883644 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-config\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.883961 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c9b633c-3813-4953-ac35-15257be56af7-proxy-ca-bundles\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.886297 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c9b633c-3813-4953-ac35-15257be56af7-serving-cert\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.898479 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8d9d\" (UniqueName: \"kubernetes.io/projected/0c9b633c-3813-4953-ac35-15257be56af7-kube-api-access-b8d9d\") pod \"controller-manager-d5b5b67b4-pq4w2\" (UID: \"0c9b633c-3813-4953-ac35-15257be56af7\") " pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.903506 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e783bc-5cb0-428a-a977-c1eb7b833a26" path="/var/lib/kubelet/pods/27e783bc-5cb0-428a-a977-c1eb7b833a26/volumes" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.904842 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9359744b-1f23-4dc4-ab3d-485214d347e5" path="/var/lib/kubelet/pods/9359744b-1f23-4dc4-ab3d-485214d347e5/volumes" Nov 24 11:20:17 
crc kubenswrapper[4678]: I1124 11:20:17.967258 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:17 crc kubenswrapper[4678]: I1124 11:20:17.989820 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:18 crc kubenswrapper[4678]: I1124 11:20:18.272499 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2"] Nov 24 11:20:18 crc kubenswrapper[4678]: I1124 11:20:18.334255 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" event={"ID":"0c9b633c-3813-4953-ac35-15257be56af7","Type":"ContainerStarted","Data":"d23bd9585491141a5842b89291f0a9a6dac38950dad94ea60be8954957375cfc"} Nov 24 11:20:18 crc kubenswrapper[4678]: I1124 11:20:18.404478 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482"] Nov 24 11:20:18 crc kubenswrapper[4678]: W1124 11:20:18.411453 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podded4510e_5378_4756_b1f4_c8b8fb801003.slice/crio-31b6b197cd224ce6de57a1aa8172ab1015cca527e5d56e211c10d72c047c6141 WatchSource:0}: Error finding container 31b6b197cd224ce6de57a1aa8172ab1015cca527e5d56e211c10d72c047c6141: Status 404 returned error can't find the container with id 31b6b197cd224ce6de57a1aa8172ab1015cca527e5d56e211c10d72c047c6141 Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.350119 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" 
event={"ID":"ded4510e-5378-4756-b1f4-c8b8fb801003","Type":"ContainerStarted","Data":"a4ecf2f2fcc8a4758d667ba71c687cb5e6edefa2ccceeb10964da3a1b8755738"} Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.350802 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" event={"ID":"ded4510e-5378-4756-b1f4-c8b8fb801003","Type":"ContainerStarted","Data":"31b6b197cd224ce6de57a1aa8172ab1015cca527e5d56e211c10d72c047c6141"} Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.351439 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.355822 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" event={"ID":"0c9b633c-3813-4953-ac35-15257be56af7","Type":"ContainerStarted","Data":"f22a24517b0cf860f49674337542db727a1d4794865189eb50a4af8c6febe8fe"} Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.356481 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.362239 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.364946 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.383558 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7487f488dc-2w482" podStartSLOduration=3.383534724 podStartE2EDuration="3.383534724s" 
podCreationTimestamp="2025-11-24 11:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:20:19.378552578 +0000 UTC m=+230.309612227" watchObservedRunningTime="2025-11-24 11:20:19.383534724 +0000 UTC m=+230.314594383" Nov 24 11:20:19 crc kubenswrapper[4678]: I1124 11:20:19.424525 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d5b5b67b4-pq4w2" podStartSLOduration=3.42449645 podStartE2EDuration="3.42449645s" podCreationTimestamp="2025-11-24 11:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:20:19.423649856 +0000 UTC m=+230.354709495" watchObservedRunningTime="2025-11-24 11:20:19.42449645 +0000 UTC m=+230.355556089" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.296553 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sj65"] Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.297986 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4sj65" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="registry-server" containerID="cri-o://7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1" gracePeriod=30 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.310306 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwmqj"] Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.310662 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nwmqj" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="registry-server" containerID="cri-o://310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652" gracePeriod=30 
Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.314396 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdcv5"] Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.314764 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" podUID="83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" containerName="marketplace-operator" containerID="cri-o://7f0543476d371c0e0cc91fe8a57cda49d205661a390c3546503957abd47b7b26" gracePeriod=30 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.334999 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwhj"] Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.335343 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pqwhj" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" containerName="registry-server" containerID="cri-o://0c80a9d6d861b2153d90e8bf131db00231466cbc3be4995f125036b16e9401c1" gracePeriod=30 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.344974 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c2hc5"] Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.346962 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.370079 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9v4tq"] Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.370486 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9v4tq" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerName="registry-server" containerID="cri-o://9f8b3772222103f29d5b8085784f6360e1c876b0aae000ba6414fe448a22e1a9" gracePeriod=30 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.377728 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c2hc5"] Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.415037 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f1b87f9-72ea-4db7-a016-17d109b58413-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.415115 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f1b87f9-72ea-4db7-a016-17d109b58413-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.415158 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w96gw\" (UniqueName: 
\"kubernetes.io/projected/0f1b87f9-72ea-4db7-a016-17d109b58413-kube-api-access-w96gw\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.515824 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f1b87f9-72ea-4db7-a016-17d109b58413-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.515904 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w96gw\" (UniqueName: \"kubernetes.io/projected/0f1b87f9-72ea-4db7-a016-17d109b58413-kube-api-access-w96gw\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.515944 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f1b87f9-72ea-4db7-a016-17d109b58413-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.517507 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f1b87f9-72ea-4db7-a016-17d109b58413-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc 
kubenswrapper[4678]: I1124 11:20:40.525293 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f1b87f9-72ea-4db7-a016-17d109b58413-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.533750 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w96gw\" (UniqueName: \"kubernetes.io/projected/0f1b87f9-72ea-4db7-a016-17d109b58413-kube-api-access-w96gw\") pod \"marketplace-operator-79b997595-c2hc5\" (UID: \"0f1b87f9-72ea-4db7-a016-17d109b58413\") " pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.547077 4678 generic.go:334] "Generic (PLEG): container finished" podID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerID="7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1" exitCode=0 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.547542 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sj65" event={"ID":"cdd6866d-2d7f-4bf4-aff4-461ed0c90347","Type":"ContainerDied","Data":"7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1"} Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.549635 4678 generic.go:334] "Generic (PLEG): container finished" podID="83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" containerID="7f0543476d371c0e0cc91fe8a57cda49d205661a390c3546503957abd47b7b26" exitCode=0 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.549702 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" 
event={"ID":"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b","Type":"ContainerDied","Data":"7f0543476d371c0e0cc91fe8a57cda49d205661a390c3546503957abd47b7b26"} Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.551982 4678 generic.go:334] "Generic (PLEG): container finished" podID="439a408b-a1ff-4517-b9b9-31902c9831da" containerID="0c80a9d6d861b2153d90e8bf131db00231466cbc3be4995f125036b16e9401c1" exitCode=0 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.552032 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwhj" event={"ID":"439a408b-a1ff-4517-b9b9-31902c9831da","Type":"ContainerDied","Data":"0c80a9d6d861b2153d90e8bf131db00231466cbc3be4995f125036b16e9401c1"} Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.559506 4678 generic.go:334] "Generic (PLEG): container finished" podID="c163752f-4564-4b60-b043-fe767dad40e4" containerID="310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652" exitCode=0 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.559601 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwmqj" event={"ID":"c163752f-4564-4b60-b043-fe767dad40e4","Type":"ContainerDied","Data":"310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652"} Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.562565 4678 generic.go:334] "Generic (PLEG): container finished" podID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerID="9f8b3772222103f29d5b8085784f6360e1c876b0aae000ba6414fe448a22e1a9" exitCode=0 Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.562591 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v4tq" event={"ID":"5ccc31ba-4304-484e-b824-42c6910e59cd","Type":"ContainerDied","Data":"9f8b3772222103f29d5b8085784f6360e1c876b0aae000ba6414fe448a22e1a9"} Nov 24 11:20:40 crc kubenswrapper[4678]: E1124 11:20:40.758767 4678 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1 is running failed: container process not found" containerID="7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:20:40 crc kubenswrapper[4678]: E1124 11:20:40.765418 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1 is running failed: container process not found" containerID="7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:20:40 crc kubenswrapper[4678]: E1124 11:20:40.766301 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1 is running failed: container process not found" containerID="7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:20:40 crc kubenswrapper[4678]: E1124 11:20:40.766400 4678 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-4sj65" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="registry-server" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.818429 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:40 crc kubenswrapper[4678]: E1124 11:20:40.900508 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652 is running failed: container process not found" containerID="310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:20:40 crc kubenswrapper[4678]: E1124 11:20:40.901115 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652 is running failed: container process not found" containerID="310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:20:40 crc kubenswrapper[4678]: E1124 11:20:40.901517 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652 is running failed: container process not found" containerID="310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:20:40 crc kubenswrapper[4678]: E1124 11:20:40.901559 4678 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-nwmqj" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="registry-server" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.916169 4678 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.921265 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-utilities\") pod \"439a408b-a1ff-4517-b9b9-31902c9831da\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.921308 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-catalog-content\") pod \"439a408b-a1ff-4517-b9b9-31902c9831da\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.921409 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5nr9\" (UniqueName: \"kubernetes.io/projected/439a408b-a1ff-4517-b9b9-31902c9831da-kube-api-access-d5nr9\") pod \"439a408b-a1ff-4517-b9b9-31902c9831da\" (UID: \"439a408b-a1ff-4517-b9b9-31902c9831da\") " Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.922734 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-utilities" (OuterVolumeSpecName: "utilities") pod "439a408b-a1ff-4517-b9b9-31902c9831da" (UID: "439a408b-a1ff-4517-b9b9-31902c9831da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.929474 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439a408b-a1ff-4517-b9b9-31902c9831da-kube-api-access-d5nr9" (OuterVolumeSpecName: "kube-api-access-d5nr9") pod "439a408b-a1ff-4517-b9b9-31902c9831da" (UID: "439a408b-a1ff-4517-b9b9-31902c9831da"). 
InnerVolumeSpecName "kube-api-access-d5nr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:20:40 crc kubenswrapper[4678]: I1124 11:20:40.949702 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "439a408b-a1ff-4517-b9b9-31902c9831da" (UID: "439a408b-a1ff-4517-b9b9-31902c9831da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.023299 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.023347 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439a408b-a1ff-4517-b9b9-31902c9831da-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.023366 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5nr9\" (UniqueName: \"kubernetes.io/projected/439a408b-a1ff-4517-b9b9-31902c9831da-kube-api-access-d5nr9\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.157737 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.162617 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.174927 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.203341 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228063 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4p8p\" (UniqueName: \"kubernetes.io/projected/c163752f-4564-4b60-b043-fe767dad40e4-kube-api-access-l4p8p\") pod \"c163752f-4564-4b60-b043-fe767dad40e4\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228123 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgmsc\" (UniqueName: \"kubernetes.io/projected/5ccc31ba-4304-484e-b824-42c6910e59cd-kube-api-access-wgmsc\") pod \"5ccc31ba-4304-484e-b824-42c6910e59cd\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228171 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-utilities\") pod \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228213 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9gwl\" (UniqueName: \"kubernetes.io/projected/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-kube-api-access-r9gwl\") pod \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228251 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-utilities\") pod 
\"c163752f-4564-4b60-b043-fe767dad40e4\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228314 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-operator-metrics\") pod \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228358 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-trusted-ca\") pod \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228402 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-catalog-content\") pod \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\" (UID: \"cdd6866d-2d7f-4bf4-aff4-461ed0c90347\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228445 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-catalog-content\") pod \"5ccc31ba-4304-484e-b824-42c6910e59cd\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228505 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfshk\" (UniqueName: \"kubernetes.io/projected/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-kube-api-access-mfshk\") pod \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\" (UID: \"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228556 4678 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-utilities\") pod \"5ccc31ba-4304-484e-b824-42c6910e59cd\" (UID: \"5ccc31ba-4304-484e-b824-42c6910e59cd\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.228595 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-catalog-content\") pod \"c163752f-4564-4b60-b043-fe767dad40e4\" (UID: \"c163752f-4564-4b60-b043-fe767dad40e4\") " Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.244356 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-utilities" (OuterVolumeSpecName: "utilities") pod "cdd6866d-2d7f-4bf4-aff4-461ed0c90347" (UID: "cdd6866d-2d7f-4bf4-aff4-461ed0c90347"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.244855 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" (UID: "83dee7d1-b6d5-4c51-9b88-84e4d35fe70b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.245250 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" (UID: "83dee7d1-b6d5-4c51-9b88-84e4d35fe70b"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.245443 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-kube-api-access-mfshk" (OuterVolumeSpecName: "kube-api-access-mfshk") pod "83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" (UID: "83dee7d1-b6d5-4c51-9b88-84e4d35fe70b"). InnerVolumeSpecName "kube-api-access-mfshk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.250818 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-utilities" (OuterVolumeSpecName: "utilities") pod "c163752f-4564-4b60-b043-fe767dad40e4" (UID: "c163752f-4564-4b60-b043-fe767dad40e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.242053 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c163752f-4564-4b60-b043-fe767dad40e4-kube-api-access-l4p8p" (OuterVolumeSpecName: "kube-api-access-l4p8p") pod "c163752f-4564-4b60-b043-fe767dad40e4" (UID: "c163752f-4564-4b60-b043-fe767dad40e4"). InnerVolumeSpecName "kube-api-access-l4p8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.253526 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-kube-api-access-r9gwl" (OuterVolumeSpecName: "kube-api-access-r9gwl") pod "cdd6866d-2d7f-4bf4-aff4-461ed0c90347" (UID: "cdd6866d-2d7f-4bf4-aff4-461ed0c90347"). InnerVolumeSpecName "kube-api-access-r9gwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.253695 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-utilities" (OuterVolumeSpecName: "utilities") pod "5ccc31ba-4304-484e-b824-42c6910e59cd" (UID: "5ccc31ba-4304-484e-b824-42c6910e59cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.260333 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ccc31ba-4304-484e-b824-42c6910e59cd-kube-api-access-wgmsc" (OuterVolumeSpecName: "kube-api-access-wgmsc") pod "5ccc31ba-4304-484e-b824-42c6910e59cd" (UID: "5ccc31ba-4304-484e-b824-42c6910e59cd"). InnerVolumeSpecName "kube-api-access-wgmsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.318410 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdd6866d-2d7f-4bf4-aff4-461ed0c90347" (UID: "cdd6866d-2d7f-4bf4-aff4-461ed0c90347"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330489 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfshk\" (UniqueName: \"kubernetes.io/projected/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-kube-api-access-mfshk\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330548 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330569 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4p8p\" (UniqueName: \"kubernetes.io/projected/c163752f-4564-4b60-b043-fe767dad40e4-kube-api-access-l4p8p\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330582 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgmsc\" (UniqueName: \"kubernetes.io/projected/5ccc31ba-4304-484e-b824-42c6910e59cd-kube-api-access-wgmsc\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330596 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330608 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9gwl\" (UniqueName: \"kubernetes.io/projected/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-kube-api-access-r9gwl\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330618 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: 
I1124 11:20:41.330632 4678 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330647 4678 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.330658 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd6866d-2d7f-4bf4-aff4-461ed0c90347-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.337060 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c163752f-4564-4b60-b043-fe767dad40e4" (UID: "c163752f-4564-4b60-b043-fe767dad40e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.373637 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ccc31ba-4304-484e-b824-42c6910e59cd" (UID: "5ccc31ba-4304-484e-b824-42c6910e59cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.432859 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c163752f-4564-4b60-b043-fe767dad40e4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.432944 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccc31ba-4304-484e-b824-42c6910e59cd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.458046 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c2hc5"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.572128 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pqwhj" event={"ID":"439a408b-a1ff-4517-b9b9-31902c9831da","Type":"ContainerDied","Data":"a93cc8154d57205ab87bcb4db88ec262d4b4310a63cb4f76ae37624d01b4a035"} Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.572184 4678 scope.go:117] "RemoveContainer" containerID="0c80a9d6d861b2153d90e8bf131db00231466cbc3be4995f125036b16e9401c1" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.572314 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pqwhj" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.580075 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwmqj" event={"ID":"c163752f-4564-4b60-b043-fe767dad40e4","Type":"ContainerDied","Data":"8bf6e7cf1d78b141093e585b643d1a12cafb3f739f18d279287b53a21056d678"} Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.580282 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwmqj" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.582853 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9v4tq" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.582860 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v4tq" event={"ID":"5ccc31ba-4304-484e-b824-42c6910e59cd","Type":"ContainerDied","Data":"933fecf2c0342ce2253b9e012aa20eb7bb04bbea35c7f76387dcb2d316f70cad"} Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.591799 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sj65" event={"ID":"cdd6866d-2d7f-4bf4-aff4-461ed0c90347","Type":"ContainerDied","Data":"c6d43726205764634f0a8467a6e0d4a5e3ba62a03aa72fb641fb53215c4398e6"} Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.592001 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sj65" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.599106 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" event={"ID":"83dee7d1-b6d5-4c51-9b88-84e4d35fe70b","Type":"ContainerDied","Data":"fe1a1e3da06157b9ec2f45ef28000cea8af335c05c22929784a9c507f3830139"} Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.599267 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bdcv5" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.604476 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" event={"ID":"0f1b87f9-72ea-4db7-a016-17d109b58413","Type":"ContainerStarted","Data":"823ba8ec9715e3ea159c35ab33e2b38f40a6e129df6df601d260a9518f4410b0"} Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.610396 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwhj"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.610639 4678 scope.go:117] "RemoveContainer" containerID="9874fc0349044a0622b2b75ce587b8a8ddd7385735dd3fd0829b4cc03ccdb04e" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.613711 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pqwhj"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.635127 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwmqj"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.637110 4678 scope.go:117] "RemoveContainer" containerID="7ee18d07b3a3e8a005180e7dbb22b088bbb2bac6b293b159157c998b597101ca" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.640216 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nwmqj"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.658883 4678 scope.go:117] "RemoveContainer" containerID="310c242b9df178c44a32f5f06bd62c3ece42d7d0b7861e1ae3942d50a44be652" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.659053 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9v4tq"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.663575 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9v4tq"] Nov 24 
11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.692367 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sj65"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.698651 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4sj65"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.709161 4678 scope.go:117] "RemoveContainer" containerID="05d12a35dc660692b80b0217bf58f2a58dba893ae46c7960f020403eb12c15f7" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.712084 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdcv5"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.717068 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bdcv5"] Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.725823 4678 scope.go:117] "RemoveContainer" containerID="9587f1542c8d3834ba03f225e95bce24419756cc7ee645c852e88c22fb63e927" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.742231 4678 scope.go:117] "RemoveContainer" containerID="9f8b3772222103f29d5b8085784f6360e1c876b0aae000ba6414fe448a22e1a9" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.757252 4678 scope.go:117] "RemoveContainer" containerID="64170346bd4885bee54b1c59dfd0390a5795a1a222f95fe06ee452eba1e86ee7" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.769591 4678 scope.go:117] "RemoveContainer" containerID="f1a77dff05214dacaf8020d5076ae251abf85a303c365b48596a0869349aaad6" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.782802 4678 scope.go:117] "RemoveContainer" containerID="7439eef84d57ded7389c9b3c99e71413a5e135a19b6f2c22e52b5bc231de91e1" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.799042 4678 scope.go:117] "RemoveContainer" containerID="23573a7669cd6ec661b03fc67828d3d8e10b049fbbcdf7729276e4c475a381bd" Nov 24 11:20:41 crc 
kubenswrapper[4678]: I1124 11:20:41.819179 4678 scope.go:117] "RemoveContainer" containerID="306e30e2214a90c830a707f37aafa488aa8c54516c6844941415cbe983ebe0a4" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.834789 4678 scope.go:117] "RemoveContainer" containerID="7f0543476d371c0e0cc91fe8a57cda49d205661a390c3546503957abd47b7b26" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.901658 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" path="/var/lib/kubelet/pods/439a408b-a1ff-4517-b9b9-31902c9831da/volumes" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.902346 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" path="/var/lib/kubelet/pods/5ccc31ba-4304-484e-b824-42c6910e59cd/volumes" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.902976 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" path="/var/lib/kubelet/pods/83dee7d1-b6d5-4c51-9b88-84e4d35fe70b/volumes" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.903878 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c163752f-4564-4b60-b043-fe767dad40e4" path="/var/lib/kubelet/pods/c163752f-4564-4b60-b043-fe767dad40e4/volumes" Nov 24 11:20:41 crc kubenswrapper[4678]: I1124 11:20:41.904434 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" path="/var/lib/kubelet/pods/cdd6866d-2d7f-4bf4-aff4-461ed0c90347/volumes" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.510556 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wk9rl"] Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.510930 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" containerName="extract-content" Nov 24 11:20:42 crc 
kubenswrapper[4678]: I1124 11:20:42.510948 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" containerName="extract-content" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.510962 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" containerName="marketplace-operator" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.510970 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" containerName="marketplace-operator" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.510982 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.510990 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511005 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerName="extract-content" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511012 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerName="extract-content" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511024 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="extract-content" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511031 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="extract-content" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511043 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerName="extract-utilities" Nov 24 11:20:42 crc 
kubenswrapper[4678]: I1124 11:20:42.511051 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerName="extract-utilities" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511064 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="extract-content" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511071 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="extract-content" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511084 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="extract-utilities" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511092 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="extract-utilities" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511105 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511113 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511123 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" containerName="extract-utilities" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511130 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" containerName="extract-utilities" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511138 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="extract-utilities" Nov 24 11:20:42 crc 
kubenswrapper[4678]: I1124 11:20:42.511146 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="extract-utilities" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511155 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511163 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: E1124 11:20:42.511173 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511180 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511291 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dee7d1-b6d5-4c51-9b88-84e4d35fe70b" containerName="marketplace-operator" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511312 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="439a408b-a1ff-4517-b9b9-31902c9831da" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511322 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ccc31ba-4304-484e-b824-42c6910e59cd" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511333 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c163752f-4564-4b60-b043-fe767dad40e4" containerName="registry-server" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.511343 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd6866d-2d7f-4bf4-aff4-461ed0c90347" containerName="registry-server" Nov 24 
11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.512222 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.514568 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.519938 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wk9rl"] Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.552738 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-catalog-content\") pod \"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.552793 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwsn\" (UniqueName: \"kubernetes.io/projected/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-kube-api-access-ljwsn\") pod \"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.553055 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-utilities\") pod \"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.615203 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" 
event={"ID":"0f1b87f9-72ea-4db7-a016-17d109b58413","Type":"ContainerStarted","Data":"8a118d90b66679caca9267020ab3dca5d4ae2f498edb16231905c72585e263d9"} Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.615702 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.619184 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.657211 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-catalog-content\") pod \"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.657376 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwsn\" (UniqueName: \"kubernetes.io/projected/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-kube-api-access-ljwsn\") pod \"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.657508 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-utilities\") pod \"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.658642 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-catalog-content\") pod 
\"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.660195 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-utilities\") pod \"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.687793 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwsn\" (UniqueName: \"kubernetes.io/projected/3c1aba28-e8ad-44c9-b67f-a82955ffd06c-kube-api-access-ljwsn\") pod \"certified-operators-wk9rl\" (UID: \"3c1aba28-e8ad-44c9-b67f-a82955ffd06c\") " pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.696484 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-c2hc5" podStartSLOduration=2.6964503779999998 podStartE2EDuration="2.696450378s" podCreationTimestamp="2025-11-24 11:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:20:42.66362266 +0000 UTC m=+253.594682309" watchObservedRunningTime="2025-11-24 11:20:42.696450378 +0000 UTC m=+253.627510027" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.745773 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-82hsh"] Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.755034 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.758194 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82hsh"] Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.758979 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.835719 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.860865 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xfsp\" (UniqueName: \"kubernetes.io/projected/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-kube-api-access-6xfsp\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.860921 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-utilities\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.860991 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-catalog-content\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.961840 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-catalog-content\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.961903 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xfsp\" (UniqueName: \"kubernetes.io/projected/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-kube-api-access-6xfsp\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.961933 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-utilities\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.962602 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-utilities\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.962853 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-catalog-content\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:42 crc kubenswrapper[4678]: I1124 11:20:42.984985 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6xfsp\" (UniqueName: \"kubernetes.io/projected/2ed0e090-9ad7-42be-bfda-9c13a37fc1c7-kube-api-access-6xfsp\") pod \"redhat-marketplace-82hsh\" (UID: \"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7\") " pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:43 crc kubenswrapper[4678]: I1124 11:20:43.072510 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:43 crc kubenswrapper[4678]: I1124 11:20:43.249578 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wk9rl"] Nov 24 11:20:43 crc kubenswrapper[4678]: I1124 11:20:43.525941 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82hsh"] Nov 24 11:20:43 crc kubenswrapper[4678]: W1124 11:20:43.543392 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ed0e090_9ad7_42be_bfda_9c13a37fc1c7.slice/crio-633252bd0694eb775d7b71577f4c82a78d190c1dfbe73558152cc7f903643e1d WatchSource:0}: Error finding container 633252bd0694eb775d7b71577f4c82a78d190c1dfbe73558152cc7f903643e1d: Status 404 returned error can't find the container with id 633252bd0694eb775d7b71577f4c82a78d190c1dfbe73558152cc7f903643e1d Nov 24 11:20:43 crc kubenswrapper[4678]: I1124 11:20:43.629140 4678 generic.go:334] "Generic (PLEG): container finished" podID="3c1aba28-e8ad-44c9-b67f-a82955ffd06c" containerID="83e22ffd841b7e6a7b74a6a28dd40d5dd462b12c23b80535ffde89b61dd8d8d0" exitCode=0 Nov 24 11:20:43 crc kubenswrapper[4678]: I1124 11:20:43.629221 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk9rl" event={"ID":"3c1aba28-e8ad-44c9-b67f-a82955ffd06c","Type":"ContainerDied","Data":"83e22ffd841b7e6a7b74a6a28dd40d5dd462b12c23b80535ffde89b61dd8d8d0"} Nov 24 11:20:43 crc kubenswrapper[4678]: I1124 11:20:43.629638 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk9rl" event={"ID":"3c1aba28-e8ad-44c9-b67f-a82955ffd06c","Type":"ContainerStarted","Data":"ca8671b702ceb6caa908403f97a186a0e5469caf1081bb35db6aa64e9b6d5dea"} Nov 24 11:20:43 crc kubenswrapper[4678]: I1124 11:20:43.631112 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hsh" event={"ID":"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7","Type":"ContainerStarted","Data":"633252bd0694eb775d7b71577f4c82a78d190c1dfbe73558152cc7f903643e1d"} Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.641370 4678 generic.go:334] "Generic (PLEG): container finished" podID="2ed0e090-9ad7-42be-bfda-9c13a37fc1c7" containerID="126948db0210f6a8429264275bf38bdb2c0535f6c08c1f338817bdc38f51f2e3" exitCode=0 Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.641463 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hsh" event={"ID":"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7","Type":"ContainerDied","Data":"126948db0210f6a8429264275bf38bdb2c0535f6c08c1f338817bdc38f51f2e3"} Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.904297 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cr22z"] Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.905806 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.908574 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.917610 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cr22z"] Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.996741 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-catalog-content\") pod \"community-operators-cr22z\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.996809 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-utilities\") pod \"community-operators-cr22z\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:44 crc kubenswrapper[4678]: I1124 11:20:44.996859 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8rbr\" (UniqueName: \"kubernetes.io/projected/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-kube-api-access-m8rbr\") pod \"community-operators-cr22z\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.098481 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-catalog-content\") pod \"community-operators-cr22z\" (UID: 
\"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.098554 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-utilities\") pod \"community-operators-cr22z\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.098697 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8rbr\" (UniqueName: \"kubernetes.io/projected/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-kube-api-access-m8rbr\") pod \"community-operators-cr22z\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.099310 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-catalog-content\") pod \"community-operators-cr22z\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.099383 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-utilities\") pod \"community-operators-cr22z\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.112537 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xtp9r"] Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.113775 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.121265 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.128478 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8rbr\" (UniqueName: \"kubernetes.io/projected/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-kube-api-access-m8rbr\") pod \"community-operators-cr22z\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.131689 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtp9r"] Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.199868 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-utilities\") pod \"redhat-operators-xtp9r\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.199972 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnkpb\" (UniqueName: \"kubernetes.io/projected/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-kube-api-access-gnkpb\") pod \"redhat-operators-xtp9r\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.200015 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-catalog-content\") pod \"redhat-operators-xtp9r\" (UID: 
\"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.264866 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.301775 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-catalog-content\") pod \"redhat-operators-xtp9r\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.301940 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-utilities\") pod \"redhat-operators-xtp9r\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.302066 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnkpb\" (UniqueName: \"kubernetes.io/projected/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-kube-api-access-gnkpb\") pod \"redhat-operators-xtp9r\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.302387 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-catalog-content\") pod \"redhat-operators-xtp9r\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.302442 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-utilities\") pod \"redhat-operators-xtp9r\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.323222 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnkpb\" (UniqueName: \"kubernetes.io/projected/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-kube-api-access-gnkpb\") pod \"redhat-operators-xtp9r\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.430264 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.651379 4678 generic.go:334] "Generic (PLEG): container finished" podID="2ed0e090-9ad7-42be-bfda-9c13a37fc1c7" containerID="d8a0023e834255fe4ac72ca6c75bb56295f1376f6b79cf1cd3fd5eb7a6cff8d7" exitCode=0 Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.651560 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hsh" event={"ID":"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7","Type":"ContainerDied","Data":"d8a0023e834255fe4ac72ca6c75bb56295f1376f6b79cf1cd3fd5eb7a6cff8d7"} Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.668173 4678 generic.go:334] "Generic (PLEG): container finished" podID="3c1aba28-e8ad-44c9-b67f-a82955ffd06c" containerID="f91707ab66907be8f177609a5559a40e4e0f72b69b9541dc1c4c939efee92137" exitCode=0 Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.668237 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk9rl" event={"ID":"3c1aba28-e8ad-44c9-b67f-a82955ffd06c","Type":"ContainerDied","Data":"f91707ab66907be8f177609a5559a40e4e0f72b69b9541dc1c4c939efee92137"} Nov 24 11:20:45 crc kubenswrapper[4678]: 
I1124 11:20:45.693876 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cr22z"] Nov 24 11:20:45 crc kubenswrapper[4678]: I1124 11:20:45.831570 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtp9r"] Nov 24 11:20:45 crc kubenswrapper[4678]: W1124 11:20:45.876202 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26ff8c7f_bc62_4204_b23d_4e6844c3d3c1.slice/crio-014dee6956b229e47cbf822fcf0142daff99d9601d61f9d56aad2bf20be30326 WatchSource:0}: Error finding container 014dee6956b229e47cbf822fcf0142daff99d9601d61f9d56aad2bf20be30326: Status 404 returned error can't find the container with id 014dee6956b229e47cbf822fcf0142daff99d9601d61f9d56aad2bf20be30326 Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.678043 4678 generic.go:334] "Generic (PLEG): container finished" podID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerID="44c10337a75c885a001aca2a011c1ce20a23e79a6b3baf17c925363f943f366d" exitCode=0 Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.678119 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr22z" event={"ID":"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49","Type":"ContainerDied","Data":"44c10337a75c885a001aca2a011c1ce20a23e79a6b3baf17c925363f943f366d"} Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.680683 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr22z" event={"ID":"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49","Type":"ContainerStarted","Data":"e79a7d58297ca26ae253047b5b29ae76568dc76730b3592f192e5465bda2a391"} Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.684935 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hsh" 
event={"ID":"2ed0e090-9ad7-42be-bfda-9c13a37fc1c7","Type":"ContainerStarted","Data":"8bc7c498e5f236688cdea9b9e458cf2b5207628721216b69dcaf09786bb2a24b"} Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.691509 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk9rl" event={"ID":"3c1aba28-e8ad-44c9-b67f-a82955ffd06c","Type":"ContainerStarted","Data":"53c5369816c502c2b3df1949f49ae373ba4544684d0518b355251229da2b9c45"} Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.693589 4678 generic.go:334] "Generic (PLEG): container finished" podID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerID="845357932fcd26ca283abd0054b6ae298c07ffab09d7a16440d3a0a1bee6d16e" exitCode=0 Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.693654 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtp9r" event={"ID":"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1","Type":"ContainerDied","Data":"845357932fcd26ca283abd0054b6ae298c07ffab09d7a16440d3a0a1bee6d16e"} Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.693705 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtp9r" event={"ID":"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1","Type":"ContainerStarted","Data":"014dee6956b229e47cbf822fcf0142daff99d9601d61f9d56aad2bf20be30326"} Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.724562 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-82hsh" podStartSLOduration=3.330133831 podStartE2EDuration="4.724537642s" podCreationTimestamp="2025-11-24 11:20:42 +0000 UTC" firstStartedPulling="2025-11-24 11:20:44.64422984 +0000 UTC m=+255.575289489" lastFinishedPulling="2025-11-24 11:20:46.038633661 +0000 UTC m=+256.969693300" observedRunningTime="2025-11-24 11:20:46.718567958 +0000 UTC m=+257.649627597" watchObservedRunningTime="2025-11-24 11:20:46.724537642 +0000 UTC m=+257.655597281" 
Nov 24 11:20:46 crc kubenswrapper[4678]: I1124 11:20:46.758608 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wk9rl" podStartSLOduration=2.264996586 podStartE2EDuration="4.758583297s" podCreationTimestamp="2025-11-24 11:20:42 +0000 UTC" firstStartedPulling="2025-11-24 11:20:43.631945128 +0000 UTC m=+254.563004767" lastFinishedPulling="2025-11-24 11:20:46.125531839 +0000 UTC m=+257.056591478" observedRunningTime="2025-11-24 11:20:46.756824625 +0000 UTC m=+257.687884264" watchObservedRunningTime="2025-11-24 11:20:46.758583297 +0000 UTC m=+257.689642936" Nov 24 11:20:47 crc kubenswrapper[4678]: I1124 11:20:47.702637 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtp9r" event={"ID":"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1","Type":"ContainerStarted","Data":"32959bcad638bd8b4a1c90451214281c9f1ed4a1e8afb3c3c7b639a523ca0e26"} Nov 24 11:20:47 crc kubenswrapper[4678]: I1124 11:20:47.704708 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr22z" event={"ID":"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49","Type":"ContainerStarted","Data":"71ec58f597d04c51b59e1904f8ce8ed066fda9f01074bcc32ae1073d866f3da8"} Nov 24 11:20:48 crc kubenswrapper[4678]: I1124 11:20:48.714016 4678 generic.go:334] "Generic (PLEG): container finished" podID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerID="71ec58f597d04c51b59e1904f8ce8ed066fda9f01074bcc32ae1073d866f3da8" exitCode=0 Nov 24 11:20:48 crc kubenswrapper[4678]: I1124 11:20:48.714121 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr22z" event={"ID":"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49","Type":"ContainerDied","Data":"71ec58f597d04c51b59e1904f8ce8ed066fda9f01074bcc32ae1073d866f3da8"} Nov 24 11:20:48 crc kubenswrapper[4678]: I1124 11:20:48.734250 4678 generic.go:334] "Generic (PLEG): container finished" 
podID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerID="32959bcad638bd8b4a1c90451214281c9f1ed4a1e8afb3c3c7b639a523ca0e26" exitCode=0 Nov 24 11:20:48 crc kubenswrapper[4678]: I1124 11:20:48.734321 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtp9r" event={"ID":"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1","Type":"ContainerDied","Data":"32959bcad638bd8b4a1c90451214281c9f1ed4a1e8afb3c3c7b639a523ca0e26"} Nov 24 11:20:49 crc kubenswrapper[4678]: I1124 11:20:49.745167 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr22z" event={"ID":"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49","Type":"ContainerStarted","Data":"2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1"} Nov 24 11:20:49 crc kubenswrapper[4678]: I1124 11:20:49.749505 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtp9r" event={"ID":"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1","Type":"ContainerStarted","Data":"b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737"} Nov 24 11:20:49 crc kubenswrapper[4678]: I1124 11:20:49.767437 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cr22z" podStartSLOduration=3.327161351 podStartE2EDuration="5.767415285s" podCreationTimestamp="2025-11-24 11:20:44 +0000 UTC" firstStartedPulling="2025-11-24 11:20:46.679993961 +0000 UTC m=+257.611053600" lastFinishedPulling="2025-11-24 11:20:49.120247895 +0000 UTC m=+260.051307534" observedRunningTime="2025-11-24 11:20:49.765081506 +0000 UTC m=+260.696141145" watchObservedRunningTime="2025-11-24 11:20:49.767415285 +0000 UTC m=+260.698474924" Nov 24 11:20:49 crc kubenswrapper[4678]: I1124 11:20:49.786524 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtp9r" podStartSLOduration=1.967868169 podStartE2EDuration="4.786500182s" 
podCreationTimestamp="2025-11-24 11:20:45 +0000 UTC" firstStartedPulling="2025-11-24 11:20:46.70083908 +0000 UTC m=+257.631898719" lastFinishedPulling="2025-11-24 11:20:49.519471093 +0000 UTC m=+260.450530732" observedRunningTime="2025-11-24 11:20:49.78164858 +0000 UTC m=+260.712708239" watchObservedRunningTime="2025-11-24 11:20:49.786500182 +0000 UTC m=+260.717559821" Nov 24 11:20:52 crc kubenswrapper[4678]: I1124 11:20:52.837723 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:52 crc kubenswrapper[4678]: I1124 11:20:52.838660 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:52 crc kubenswrapper[4678]: I1124 11:20:52.918271 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:53 crc kubenswrapper[4678]: I1124 11:20:53.073385 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:53 crc kubenswrapper[4678]: I1124 11:20:53.073473 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:53 crc kubenswrapper[4678]: I1124 11:20:53.126094 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:53 crc kubenswrapper[4678]: I1124 11:20:53.820803 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wk9rl" Nov 24 11:20:53 crc kubenswrapper[4678]: I1124 11:20:53.827229 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-82hsh" Nov 24 11:20:55 crc kubenswrapper[4678]: I1124 11:20:55.265418 4678 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:55 crc kubenswrapper[4678]: I1124 11:20:55.265508 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:55 crc kubenswrapper[4678]: I1124 11:20:55.312054 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:20:55 crc kubenswrapper[4678]: I1124 11:20:55.431099 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:55 crc kubenswrapper[4678]: I1124 11:20:55.431210 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:55 crc kubenswrapper[4678]: I1124 11:20:55.476392 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:55 crc kubenswrapper[4678]: I1124 11:20:55.835436 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:20:55 crc kubenswrapper[4678]: I1124 11:20:55.837573 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:21:10 crc kubenswrapper[4678]: I1124 11:21:10.836621 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm"] Nov 24 11:21:10 crc kubenswrapper[4678]: I1124 11:21:10.840265 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:10 crc kubenswrapper[4678]: I1124 11:21:10.842635 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 24 11:21:10 crc kubenswrapper[4678]: I1124 11:21:10.843802 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 24 11:21:10 crc kubenswrapper[4678]: I1124 11:21:10.843932 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm"] Nov 24 11:21:10 crc kubenswrapper[4678]: I1124 11:21:10.844041 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 24 11:21:10 crc kubenswrapper[4678]: I1124 11:21:10.845052 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 24 11:21:10 crc kubenswrapper[4678]: I1124 11:21:10.845750 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.002496 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ccf1e08-a752-439d-b442-af04c9241b89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.002569 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7ccf1e08-a752-439d-b442-af04c9241b89-telemetry-config\") pod 
\"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.002643 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpk4f\" (UniqueName: \"kubernetes.io/projected/7ccf1e08-a752-439d-b442-af04c9241b89-kube-api-access-hpk4f\") pod \"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.104365 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ccf1e08-a752-439d-b442-af04c9241b89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.104471 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7ccf1e08-a752-439d-b442-af04c9241b89-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.104644 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpk4f\" (UniqueName: \"kubernetes.io/projected/7ccf1e08-a752-439d-b442-af04c9241b89-kube-api-access-hpk4f\") pod \"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc 
kubenswrapper[4678]: I1124 11:21:11.105773 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/7ccf1e08-a752-439d-b442-af04c9241b89-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.114323 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ccf1e08-a752-439d-b442-af04c9241b89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.121938 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpk4f\" (UniqueName: \"kubernetes.io/projected/7ccf1e08-a752-439d-b442-af04c9241b89-kube-api-access-hpk4f\") pod \"cluster-monitoring-operator-6d5b84845-xblhm\" (UID: \"7ccf1e08-a752-439d-b442-af04c9241b89\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.161535 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.600412 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm"] Nov 24 11:21:11 crc kubenswrapper[4678]: W1124 11:21:11.603801 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ccf1e08_a752_439d_b442_af04c9241b89.slice/crio-b4e719f04d47dcc844f9eb02b06a77e5c9586fcae147966f99d6207958f332ef WatchSource:0}: Error finding container b4e719f04d47dcc844f9eb02b06a77e5c9586fcae147966f99d6207958f332ef: Status 404 returned error can't find the container with id b4e719f04d47dcc844f9eb02b06a77e5c9586fcae147966f99d6207958f332ef Nov 24 11:21:11 crc kubenswrapper[4678]: I1124 11:21:11.903646 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" event={"ID":"7ccf1e08-a752-439d-b442-af04c9241b89","Type":"ContainerStarted","Data":"b4e719f04d47dcc844f9eb02b06a77e5c9586fcae147966f99d6207958f332ef"} Nov 24 11:21:13 crc kubenswrapper[4678]: I1124 11:21:13.946576 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" event={"ID":"7ccf1e08-a752-439d-b442-af04c9241b89","Type":"ContainerStarted","Data":"9e60aa30096a5932bb5df4507af60cb0644a048add26f10e219b94cdce7fc708"} Nov 24 11:21:13 crc kubenswrapper[4678]: I1124 11:21:13.979404 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xblhm" podStartSLOduration=1.852562967 podStartE2EDuration="3.979374814s" podCreationTimestamp="2025-11-24 11:21:10 +0000 UTC" firstStartedPulling="2025-11-24 11:21:11.607826443 +0000 UTC m=+282.538886122" lastFinishedPulling="2025-11-24 11:21:13.73463833 +0000 UTC m=+284.665697969" 
observedRunningTime="2025-11-24 11:21:13.97752936 +0000 UTC m=+284.908589059" watchObservedRunningTime="2025-11-24 11:21:13.979374814 +0000 UTC m=+284.910434453" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.347837 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5dt65"] Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.349137 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.375766 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5dt65"] Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.440712 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g"] Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.441738 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.444320 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-whbz6" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.445021 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.452225 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8881b4-a724-4565-9742-c3b25980be71-trusted-ca\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.452286 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.452340 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8881b4-a724-4565-9742-c3b25980be71-registry-certificates\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.452371 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-registry-tls\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.452390 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8881b4-a724-4565-9742-c3b25980be71-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.452421 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2nd\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-kube-api-access-qm2nd\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.452443 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-bound-sa-token\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.452547 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8881b4-a724-4565-9742-c3b25980be71-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.456130 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g"] Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.483793 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.554369 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8881b4-a724-4565-9742-c3b25980be71-trusted-ca\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.554447 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8881b4-a724-4565-9742-c3b25980be71-registry-certificates\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.554478 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-registry-tls\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 
11:21:14.554499 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8881b4-a724-4565-9742-c3b25980be71-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.554519 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm2nd\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-kube-api-access-qm2nd\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.554535 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-bound-sa-token\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.554559 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cd5dfd85-7044-4ced-84b0-98670fdff593-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zzw5g\" (UID: \"cd5dfd85-7044-4ced-84b0-98670fdff593\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.554578 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8881b4-a724-4565-9742-c3b25980be71-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5dt65\" (UID: 
\"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.555716 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8881b4-a724-4565-9742-c3b25980be71-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.556397 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8881b4-a724-4565-9742-c3b25980be71-registry-certificates\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.556451 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8881b4-a724-4565-9742-c3b25980be71-trusted-ca\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.561144 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-registry-tls\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.561516 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8881b4-a724-4565-9742-c3b25980be71-installation-pull-secrets\") pod 
\"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.575930 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-bound-sa-token\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.579307 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm2nd\" (UniqueName: \"kubernetes.io/projected/cc8881b4-a724-4565-9742-c3b25980be71-kube-api-access-qm2nd\") pod \"image-registry-66df7c8f76-5dt65\" (UID: \"cc8881b4-a724-4565-9742-c3b25980be71\") " pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.655909 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cd5dfd85-7044-4ced-84b0-98670fdff593-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zzw5g\" (UID: \"cd5dfd85-7044-4ced-84b0-98670fdff593\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.660189 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cd5dfd85-7044-4ced-84b0-98670fdff593-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zzw5g\" (UID: \"cd5dfd85-7044-4ced-84b0-98670fdff593\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.671135 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:14 crc kubenswrapper[4678]: I1124 11:21:14.770101 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" Nov 24 11:21:15 crc kubenswrapper[4678]: I1124 11:21:15.099016 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5dt65"] Nov 24 11:21:15 crc kubenswrapper[4678]: W1124 11:21:15.107288 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc8881b4_a724_4565_9742_c3b25980be71.slice/crio-a41234a8c0be9f41b3251d6331208c5e5647241d0ff17df66851ca0272e73ad7 WatchSource:0}: Error finding container a41234a8c0be9f41b3251d6331208c5e5647241d0ff17df66851ca0272e73ad7: Status 404 returned error can't find the container with id a41234a8c0be9f41b3251d6331208c5e5647241d0ff17df66851ca0272e73ad7 Nov 24 11:21:15 crc kubenswrapper[4678]: I1124 11:21:15.208929 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g"] Nov 24 11:21:15 crc kubenswrapper[4678]: I1124 11:21:15.974312 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" event={"ID":"cd5dfd85-7044-4ced-84b0-98670fdff593","Type":"ContainerStarted","Data":"d1ff2466758d52a7807165aaf1bc93c4ef9ed39ab3a09041490d2521dbcfdbd4"} Nov 24 11:21:15 crc kubenswrapper[4678]: I1124 11:21:15.976501 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" event={"ID":"cc8881b4-a724-4565-9742-c3b25980be71","Type":"ContainerStarted","Data":"09548986867e39036499d3258f11e0ce4f6d01ba4945f3bd8ba4b7717a055497"} Nov 24 11:21:15 crc kubenswrapper[4678]: I1124 11:21:15.976571 4678 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" event={"ID":"cc8881b4-a724-4565-9742-c3b25980be71","Type":"ContainerStarted","Data":"a41234a8c0be9f41b3251d6331208c5e5647241d0ff17df66851ca0272e73ad7"} Nov 24 11:21:15 crc kubenswrapper[4678]: I1124 11:21:15.976744 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:16 crc kubenswrapper[4678]: I1124 11:21:16.011806 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" podStartSLOduration=2.01176154 podStartE2EDuration="2.01176154s" podCreationTimestamp="2025-11-24 11:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:21:16.003777236 +0000 UTC m=+286.934836905" watchObservedRunningTime="2025-11-24 11:21:16.01176154 +0000 UTC m=+286.942821219" Nov 24 11:21:17 crc kubenswrapper[4678]: I1124 11:21:17.990555 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" event={"ID":"cd5dfd85-7044-4ced-84b0-98670fdff593","Type":"ContainerStarted","Data":"1cf5bb8fd9573f0fb7a872bbac779fc3381dfc2f869501074fb4e6eb061131d2"} Nov 24 11:21:17 crc kubenswrapper[4678]: I1124 11:21:17.991836 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" Nov 24 11:21:17 crc kubenswrapper[4678]: I1124 11:21:17.997916 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.012487 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zzw5g" podStartSLOduration=1.718198839 podStartE2EDuration="4.012456784s" podCreationTimestamp="2025-11-24 11:21:14 +0000 UTC" firstStartedPulling="2025-11-24 11:21:15.223905929 +0000 UTC m=+286.154965568" lastFinishedPulling="2025-11-24 11:21:17.518163874 +0000 UTC m=+288.449223513" observedRunningTime="2025-11-24 11:21:18.008143077 +0000 UTC m=+288.939202756" watchObservedRunningTime="2025-11-24 11:21:18.012456784 +0000 UTC m=+288.943516423" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.511903 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-px92x"] Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.512907 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.516183 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.516971 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-hpdb7" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.517182 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.517581 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.524835 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-px92x"] Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.636213 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-metrics-client-ca\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.636830 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.636891 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79djx\" (UniqueName: \"kubernetes.io/projected/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-kube-api-access-79djx\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.636955 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.738950 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-prometheus-operator-kube-rbac-proxy-config\") pod 
\"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.739043 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79djx\" (UniqueName: \"kubernetes.io/projected/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-kube-api-access-79djx\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.739112 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.739214 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-metrics-client-ca\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.740505 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-metrics-client-ca\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.747110 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.747148 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.756270 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79djx\" (UniqueName: \"kubernetes.io/projected/d1bfa29d-91e4-4187-9bcf-ca2fb0391a82-kube-api-access-79djx\") pod \"prometheus-operator-db54df47d-px92x\" (UID: \"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82\") " pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:18 crc kubenswrapper[4678]: I1124 11:21:18.833517 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" Nov 24 11:21:19 crc kubenswrapper[4678]: I1124 11:21:19.288034 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-px92x"] Nov 24 11:21:19 crc kubenswrapper[4678]: W1124 11:21:19.296755 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1bfa29d_91e4_4187_9bcf_ca2fb0391a82.slice/crio-eb48234c8e3303b5c017c5f3dd233c5aa0a83f1bb0dcefa68a087c088a7a0605 WatchSource:0}: Error finding container eb48234c8e3303b5c017c5f3dd233c5aa0a83f1bb0dcefa68a087c088a7a0605: Status 404 returned error can't find the container with id eb48234c8e3303b5c017c5f3dd233c5aa0a83f1bb0dcefa68a087c088a7a0605 Nov 24 11:21:20 crc kubenswrapper[4678]: I1124 11:21:20.007811 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" event={"ID":"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82","Type":"ContainerStarted","Data":"eb48234c8e3303b5c017c5f3dd233c5aa0a83f1bb0dcefa68a087c088a7a0605"} Nov 24 11:21:22 crc kubenswrapper[4678]: I1124 11:21:22.030295 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" event={"ID":"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82","Type":"ContainerStarted","Data":"97d3930c53da42d2ba3de32acfe96b762752c21c96f502f46d6c11211e3f6a97"} Nov 24 11:21:22 crc kubenswrapper[4678]: I1124 11:21:22.030361 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" event={"ID":"d1bfa29d-91e4-4187-9bcf-ca2fb0391a82","Type":"ContainerStarted","Data":"0aefd36d6df8837eb19b38d06b134ccff1343949c5447b6ffdd1d4e5a3c8a4a2"} Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.863370 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/prometheus-operator-db54df47d-px92x" podStartSLOduration=3.911239052 podStartE2EDuration="5.863347925s" podCreationTimestamp="2025-11-24 11:21:18 +0000 UTC" firstStartedPulling="2025-11-24 11:21:19.299843201 +0000 UTC m=+290.230902840" lastFinishedPulling="2025-11-24 11:21:21.251952074 +0000 UTC m=+292.183011713" observedRunningTime="2025-11-24 11:21:22.050994606 +0000 UTC m=+292.982054265" watchObservedRunningTime="2025-11-24 11:21:23.863347925 +0000 UTC m=+294.794407564" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.866538 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn"] Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.867728 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.871780 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-tcgxs" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.873296 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.874412 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.875379 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-qvcnv"] Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.876547 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.880621 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.881887 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-dtfxg" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.887448 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn"] Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.890383 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.981770 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96"] Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.983492 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.985098 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96"] Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.987403 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-dgmwr" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.987830 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.988094 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 24 11:21:23 crc kubenswrapper[4678]: I1124 11:21:23.988338 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.069919 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.069979 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-root\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070007 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1f934df4-571e-4adf-b3de-85a37069651b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070048 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkhx\" (UniqueName: \"kubernetes.io/projected/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-api-access-ckkhx\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070072 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/18aaa080-366d-44cc-a6ce-d3f265bd9e46-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070095 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-wtmp\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070114 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070160 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2pmk\" (UniqueName: \"kubernetes.io/projected/aec5de7b-1076-48d3-8d47-361adecc20ed-kube-api-access-h2pmk\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070220 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aec5de7b-1076-48d3-8d47-361adecc20ed-metrics-client-ca\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070241 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-sys\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070270 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1f934df4-571e-4adf-b3de-85a37069651b-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070287 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjpv\" (UniqueName: \"kubernetes.io/projected/1f934df4-571e-4adf-b3de-85a37069651b-kube-api-access-7xjpv\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070310 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f934df4-571e-4adf-b3de-85a37069651b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070350 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070370 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/18aaa080-366d-44cc-a6ce-d3f265bd9e46-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070390 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-tls\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070406 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-textfile\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.070434 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.171433 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/18aaa080-366d-44cc-a6ce-d3f265bd9e46-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.171842 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-wtmp\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 
11:21:24.171982 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.172140 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2pmk\" (UniqueName: \"kubernetes.io/projected/aec5de7b-1076-48d3-8d47-361adecc20ed-kube-api-access-h2pmk\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.172250 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/18aaa080-366d-44cc-a6ce-d3f265bd9e46-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.172192 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-wtmp\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.172502 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aec5de7b-1076-48d3-8d47-361adecc20ed-metrics-client-ca\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc 
kubenswrapper[4678]: I1124 11:21:24.173502 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-sys\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.173433 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aec5de7b-1076-48d3-8d47-361adecc20ed-metrics-client-ca\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.173561 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-sys\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.173782 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1f934df4-571e-4adf-b3de-85a37069651b-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.174135 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xjpv\" (UniqueName: \"kubernetes.io/projected/1f934df4-571e-4adf-b3de-85a37069651b-kube-api-access-7xjpv\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 
11:21:24.174261 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f934df4-571e-4adf-b3de-85a37069651b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.174398 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.174956 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/18aaa080-366d-44cc-a6ce-d3f265bd9e46-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.175107 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-tls\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.175208 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-textfile\") pod \"node-exporter-qvcnv\" (UID: 
\"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.175316 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.175402 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.175487 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-root\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.176277 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.175853 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-textfile\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.174834 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1f934df4-571e-4adf-b3de-85a37069651b-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.176014 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/aec5de7b-1076-48d3-8d47-361adecc20ed-root\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.176280 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1f934df4-571e-4adf-b3de-85a37069651b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.176371 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkhx\" (UniqueName: \"kubernetes.io/projected/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-api-access-ckkhx\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 
11:21:24.175741 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/18aaa080-366d-44cc-a6ce-d3f265bd9e46-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.181533 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-tls\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.181678 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.181844 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aec5de7b-1076-48d3-8d47-361adecc20ed-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.182026 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f934df4-571e-4adf-b3de-85a37069651b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.191626 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xjpv\" (UniqueName: \"kubernetes.io/projected/1f934df4-571e-4adf-b3de-85a37069651b-kube-api-access-7xjpv\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.192737 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1f934df4-571e-4adf-b3de-85a37069651b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-9mwwn\" (UID: \"1f934df4-571e-4adf-b3de-85a37069651b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.193168 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2pmk\" (UniqueName: \"kubernetes.io/projected/aec5de7b-1076-48d3-8d47-361adecc20ed-kube-api-access-h2pmk\") pod \"node-exporter-qvcnv\" (UID: \"aec5de7b-1076-48d3-8d47-361adecc20ed\") " pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.194648 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-qvcnv" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.195184 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.195449 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkhx\" (UniqueName: \"kubernetes.io/projected/18aaa080-366d-44cc-a6ce-d3f265bd9e46-kube-api-access-ckkhx\") pod \"kube-state-metrics-777cb5bd5d-vst96\" (UID: \"18aaa080-366d-44cc-a6ce-d3f265bd9e46\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: W1124 11:21:24.218106 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaec5de7b_1076_48d3_8d47_361adecc20ed.slice/crio-349a3c52fa397ce30662e4d33deee9a8f7b86443d3aa685fc25fe5ba3f121893 WatchSource:0}: Error finding container 349a3c52fa397ce30662e4d33deee9a8f7b86443d3aa685fc25fe5ba3f121893: Status 404 returned error can't find the container with id 349a3c52fa397ce30662e4d33deee9a8f7b86443d3aa685fc25fe5ba3f121893 Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.314096 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.486647 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.729571 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96"] Nov 24 11:21:24 crc kubenswrapper[4678]: I1124 11:21:24.975039 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn"] Nov 24 11:21:24 crc kubenswrapper[4678]: W1124 11:21:24.984613 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f934df4_571e_4adf_b3de_85a37069651b.slice/crio-742774da42094d269812601137d9c28f731a3c6de6e3a906e2cce99fb916190d WatchSource:0}: Error finding container 742774da42094d269812601137d9c28f731a3c6de6e3a906e2cce99fb916190d: Status 404 returned error can't find the container with id 742774da42094d269812601137d9c28f731a3c6de6e3a906e2cce99fb916190d Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.050269 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" event={"ID":"18aaa080-366d-44cc-a6ce-d3f265bd9e46","Type":"ContainerStarted","Data":"5ed7426c379b41ace0bb1fa0ae06ebd59aeb917a1e81bb67800ea83eb273842a"} Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.051764 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" event={"ID":"1f934df4-571e-4adf-b3de-85a37069651b","Type":"ContainerStarted","Data":"742774da42094d269812601137d9c28f731a3c6de6e3a906e2cce99fb916190d"} Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.052797 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qvcnv" event={"ID":"aec5de7b-1076-48d3-8d47-361adecc20ed","Type":"ContainerStarted","Data":"349a3c52fa397ce30662e4d33deee9a8f7b86443d3aa685fc25fe5ba3f121893"} Nov 24 
11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.088385 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.091168 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.097181 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-8tccg" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.097181 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.096880 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.097351 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.098501 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.098502 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.098639 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.098717 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.105456 4678 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.126276 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.192308 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b4e21fd-069d-4684-aa2b-e47f75ec335b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.192524 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.192707 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b4e21fd-069d-4684-aa2b-e47f75ec335b-config-out\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.192806 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.192909 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b4e21fd-069d-4684-aa2b-e47f75ec335b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.193058 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-web-config\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.193174 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-config-volume\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.193308 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.193430 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7b4e21fd-069d-4684-aa2b-e47f75ec335b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.193547 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.193666 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/7b4e21fd-069d-4684-aa2b-e47f75ec335b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.193797 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6xm\" (UniqueName: \"kubernetes.io/projected/7b4e21fd-069d-4684-aa2b-e47f75ec335b-kube-api-access-tj6xm\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.295730 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b4e21fd-069d-4684-aa2b-e47f75ec335b-config-out\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296274 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" 
(UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296302 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b4e21fd-069d-4684-aa2b-e47f75ec335b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296340 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-web-config\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296363 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-config-volume\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296392 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296437 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7b4e21fd-069d-4684-aa2b-e47f75ec335b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: 
\"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296473 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296505 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/7b4e21fd-069d-4684-aa2b-e47f75ec335b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296790 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj6xm\" (UniqueName: \"kubernetes.io/projected/7b4e21fd-069d-4684-aa2b-e47f75ec335b-kube-api-access-tj6xm\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296896 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b4e21fd-069d-4684-aa2b-e47f75ec335b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.296960 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" 
(UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.297398 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/7b4e21fd-069d-4684-aa2b-e47f75ec335b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.300243 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7b4e21fd-069d-4684-aa2b-e47f75ec335b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.301144 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b4e21fd-069d-4684-aa2b-e47f75ec335b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.303015 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b4e21fd-069d-4684-aa2b-e47f75ec335b-config-out\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.303095 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.303183 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.303336 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b4e21fd-069d-4684-aa2b-e47f75ec335b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.303955 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.309255 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-config-volume\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.310133 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-web-config\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.315626 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7b4e21fd-069d-4684-aa2b-e47f75ec335b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.318935 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj6xm\" (UniqueName: \"kubernetes.io/projected/7b4e21fd-069d-4684-aa2b-e47f75ec335b-kube-api-access-tj6xm\") pod \"alertmanager-main-0\" (UID: \"7b4e21fd-069d-4684-aa2b-e47f75ec335b\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.413503 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.853317 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 11:21:25 crc kubenswrapper[4678]: W1124 11:21:25.859003 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b4e21fd_069d_4684_aa2b_e47f75ec335b.slice/crio-6df06f55c2b8818fc78ef7224c77920a66f8b2c2df27e7e0a1d119ddce5df852 WatchSource:0}: Error finding container 6df06f55c2b8818fc78ef7224c77920a66f8b2c2df27e7e0a1d119ddce5df852: Status 404 returned error can't find the container with id 6df06f55c2b8818fc78ef7224c77920a66f8b2c2df27e7e0a1d119ddce5df852 Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.946070 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-87676557-vrss6"] Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.948096 4678 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.950385 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.950849 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.950944 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.951079 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7u9fe25kntldp" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.951441 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.951519 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.952771 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-mxk6g" Nov 24 11:21:25 crc kubenswrapper[4678]: I1124 11:21:25.985185 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-87676557-vrss6"] Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.013015 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-grpc-tls\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " 
pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.013118 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmv5\" (UniqueName: \"kubernetes.io/projected/ee214084-e367-44d9-ad83-ba4f9297a829-kube-api-access-vjmv5\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.013167 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.013222 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.013258 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ee214084-e367-44d9-ad83-ba4f9297a829-metrics-client-ca\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.014064 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.014307 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-tls\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.014657 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.062876 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" event={"ID":"1f934df4-571e-4adf-b3de-85a37069651b","Type":"ContainerStarted","Data":"f00c066a36a35a39830b1b628c5d45b5ad13c60ddbbcf604bc66309f34131cfa"} Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.062944 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" event={"ID":"1f934df4-571e-4adf-b3de-85a37069651b","Type":"ContainerStarted","Data":"bfeed7d69ff30555ba76b0254aa52b546897af5c77299ab8a711e9daa53bbd48"} Nov 24 11:21:26 crc 
kubenswrapper[4678]: I1124 11:21:26.066036 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7b4e21fd-069d-4684-aa2b-e47f75ec335b","Type":"ContainerStarted","Data":"6df06f55c2b8818fc78ef7224c77920a66f8b2c2df27e7e0a1d119ddce5df852"} Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.115962 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.116058 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-grpc-tls\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.116099 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmv5\" (UniqueName: \"kubernetes.io/projected/ee214084-e367-44d9-ad83-ba4f9297a829-kube-api-access-vjmv5\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.116124 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " 
pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.116148 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.116168 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ee214084-e367-44d9-ad83-ba4f9297a829-metrics-client-ca\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.116202 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.116239 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-tls\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.117290 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ee214084-e367-44d9-ad83-ba4f9297a829-metrics-client-ca\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.123814 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.123962 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.124004 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.124196 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " 
pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.124358 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-grpc-tls\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.124446 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/ee214084-e367-44d9-ad83-ba4f9297a829-secret-thanos-querier-tls\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.134433 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmv5\" (UniqueName: \"kubernetes.io/projected/ee214084-e367-44d9-ad83-ba4f9297a829-kube-api-access-vjmv5\") pod \"thanos-querier-87676557-vrss6\" (UID: \"ee214084-e367-44d9-ad83-ba4f9297a829\") " pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.275994 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:26 crc kubenswrapper[4678]: I1124 11:21:26.507045 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-87676557-vrss6"] Nov 24 11:21:26 crc kubenswrapper[4678]: W1124 11:21:26.518367 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee214084_e367_44d9_ad83_ba4f9297a829.slice/crio-cc71f5e7ecbfa27f20291cc85e95822f2f874e596d65cfcc8e8e08632f5ab07c WatchSource:0}: Error finding container cc71f5e7ecbfa27f20291cc85e95822f2f874e596d65cfcc8e8e08632f5ab07c: Status 404 returned error can't find the container with id cc71f5e7ecbfa27f20291cc85e95822f2f874e596d65cfcc8e8e08632f5ab07c Nov 24 11:21:27 crc kubenswrapper[4678]: I1124 11:21:27.085394 4678 generic.go:334] "Generic (PLEG): container finished" podID="aec5de7b-1076-48d3-8d47-361adecc20ed" containerID="ac601c9b58f16d314ffdd04c504d1d538291d2d6d77ff1a5ae523c95829598c1" exitCode=0 Nov 24 11:21:27 crc kubenswrapper[4678]: I1124 11:21:27.085533 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qvcnv" event={"ID":"aec5de7b-1076-48d3-8d47-361adecc20ed","Type":"ContainerDied","Data":"ac601c9b58f16d314ffdd04c504d1d538291d2d6d77ff1a5ae523c95829598c1"} Nov 24 11:21:27 crc kubenswrapper[4678]: I1124 11:21:27.090120 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87676557-vrss6" event={"ID":"ee214084-e367-44d9-ad83-ba4f9297a829","Type":"ContainerStarted","Data":"cc71f5e7ecbfa27f20291cc85e95822f2f874e596d65cfcc8e8e08632f5ab07c"} Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.104337 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" 
event={"ID":"18aaa080-366d-44cc-a6ce-d3f265bd9e46","Type":"ContainerStarted","Data":"54b278f71c3516f39a822032719e18c8ba73401d3ee278853069dff807f378d8"} Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.110712 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" event={"ID":"1f934df4-571e-4adf-b3de-85a37069651b","Type":"ContainerStarted","Data":"357d7be3bbb9e5e922e631bc7344d0370ba2c05b2861032bba7bad83474541a0"} Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.115161 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qvcnv" event={"ID":"aec5de7b-1076-48d3-8d47-361adecc20ed","Type":"ContainerStarted","Data":"a8bc5dccfa33e21062629a3c864408928ae1afcba17e62377fb4750a8f294c3b"} Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.177878 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-9mwwn" podStartSLOduration=3.154151418 podStartE2EDuration="5.177840329s" podCreationTimestamp="2025-11-24 11:21:23 +0000 UTC" firstStartedPulling="2025-11-24 11:21:25.341652401 +0000 UTC m=+296.272712030" lastFinishedPulling="2025-11-24 11:21:27.365341302 +0000 UTC m=+298.296400941" observedRunningTime="2025-11-24 11:21:28.141258263 +0000 UTC m=+299.072317922" watchObservedRunningTime="2025-11-24 11:21:28.177840329 +0000 UTC m=+299.108899968" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.206605 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-qvcnv" podStartSLOduration=3.522052118 podStartE2EDuration="5.206571395s" podCreationTimestamp="2025-11-24 11:21:23 +0000 UTC" firstStartedPulling="2025-11-24 11:21:24.220850589 +0000 UTC m=+295.151910228" lastFinishedPulling="2025-11-24 11:21:25.905369866 +0000 UTC m=+296.836429505" observedRunningTime="2025-11-24 11:21:28.201781914 +0000 UTC m=+299.132841553" 
watchObservedRunningTime="2025-11-24 11:21:28.206571395 +0000 UTC m=+299.137631034" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.716905 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7dbdb644bf-mkmpq"] Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.718234 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.737846 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7dbdb644bf-mkmpq"] Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.877478 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-serving-cert\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.877710 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-oauth-serving-cert\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.877748 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-config\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.877779 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-trusted-ca-bundle\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.877834 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-oauth-config\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.877897 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-service-ca\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.877919 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-524bl\" (UniqueName: \"kubernetes.io/projected/ccfeaa51-b66a-475f-9dae-985e6ab48407-kube-api-access-524bl\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.980253 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-trusted-ca-bundle\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.980352 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-oauth-config\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.980413 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-service-ca\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.980430 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-524bl\" (UniqueName: \"kubernetes.io/projected/ccfeaa51-b66a-475f-9dae-985e6ab48407-kube-api-access-524bl\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.980449 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-serving-cert\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.980490 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-oauth-serving-cert\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.980513 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-config\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.981544 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-config\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.982148 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-service-ca\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.983605 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-trusted-ca-bundle\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.985332 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-oauth-serving-cert\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.989367 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-serving-cert\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:28 crc kubenswrapper[4678]: I1124 11:21:28.997400 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-oauth-config\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.027756 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-524bl\" (UniqueName: \"kubernetes.io/projected/ccfeaa51-b66a-475f-9dae-985e6ab48407-kube-api-access-524bl\") pod \"console-7dbdb644bf-mkmpq\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.040342 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.142159 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" event={"ID":"18aaa080-366d-44cc-a6ce-d3f265bd9e46","Type":"ContainerStarted","Data":"e3b73b785a17cfc9a76cc7daa3646d36106c0a2d3cbd43872fab8e84a4ed6ce9"} Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.142207 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" event={"ID":"18aaa080-366d-44cc-a6ce-d3f265bd9e46","Type":"ContainerStarted","Data":"0d7d6a737695d8a20def922288ffb9af47277c43f78324134a57ede89c154d14"} Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.154131 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qvcnv" event={"ID":"aec5de7b-1076-48d3-8d47-361adecc20ed","Type":"ContainerStarted","Data":"b937e7c4bd99c5e00f8dc9690ae1796e556315e8c9fb089f4e2bc93845a087a0"} Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.160892 4678 generic.go:334] "Generic (PLEG): container finished" podID="7b4e21fd-069d-4684-aa2b-e47f75ec335b" containerID="9e452ebe228c900d48bc705972643b47f43973a6f1806cfc82943f4c26b8400c" exitCode=0 Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.161991 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7b4e21fd-069d-4684-aa2b-e47f75ec335b","Type":"ContainerDied","Data":"9e452ebe228c900d48bc705972643b47f43973a6f1806cfc82943f4c26b8400c"} Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.162722 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vst96" podStartSLOduration=3.552313889 podStartE2EDuration="6.162670359s" podCreationTimestamp="2025-11-24 11:21:23 +0000 UTC" firstStartedPulling="2025-11-24 11:21:24.741878596 
+0000 UTC m=+295.672938235" lastFinishedPulling="2025-11-24 11:21:27.352235066 +0000 UTC m=+298.283294705" observedRunningTime="2025-11-24 11:21:29.15791257 +0000 UTC m=+300.088972219" watchObservedRunningTime="2025-11-24 11:21:29.162670359 +0000 UTC m=+300.093729998" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.190477 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-5bf474f96b-4ntw2"] Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.191919 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.195863 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.196131 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-f286j6m05i0hq" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.199034 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-926rr" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.199275 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.199438 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.206881 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5bf474f96b-4ntw2"] Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.253880 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.390910 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6c25\" (UniqueName: \"kubernetes.io/projected/e531b581-8c14-4788-a28e-e08c82d9ee5d-kube-api-access-w6c25\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.391002 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/e531b581-8c14-4788-a28e-e08c82d9ee5d-metrics-server-audit-profiles\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.391043 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e531b581-8c14-4788-a28e-e08c82d9ee5d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.391084 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-secret-metrics-client-certs\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.391140 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/e531b581-8c14-4788-a28e-e08c82d9ee5d-audit-log\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.391398 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-secret-metrics-server-tls\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.391582 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-client-ca-bundle\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.492370 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6c25\" (UniqueName: \"kubernetes.io/projected/e531b581-8c14-4788-a28e-e08c82d9ee5d-kube-api-access-w6c25\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.492433 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/e531b581-8c14-4788-a28e-e08c82d9ee5d-metrics-server-audit-profiles\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 
11:21:29.492463 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e531b581-8c14-4788-a28e-e08c82d9ee5d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.492496 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-secret-metrics-client-certs\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.492533 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/e531b581-8c14-4788-a28e-e08c82d9ee5d-audit-log\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.492556 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-secret-metrics-server-tls\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.492587 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-client-ca-bundle\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " 
pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.493740 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/e531b581-8c14-4788-a28e-e08c82d9ee5d-audit-log\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.493791 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e531b581-8c14-4788-a28e-e08c82d9ee5d-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.494000 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/e531b581-8c14-4788-a28e-e08c82d9ee5d-metrics-server-audit-profiles\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.506029 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-secret-metrics-server-tls\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.506413 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-secret-metrics-client-certs\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.508651 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e531b581-8c14-4788-a28e-e08c82d9ee5d-client-ca-bundle\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.509456 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6c25\" (UniqueName: \"kubernetes.io/projected/e531b581-8c14-4788-a28e-e08c82d9ee5d-kube-api-access-w6c25\") pod \"metrics-server-5bf474f96b-4ntw2\" (UID: \"e531b581-8c14-4788-a28e-e08c82d9ee5d\") " pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.567533 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.670335 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv"] Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.671283 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.673954 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.679795 4678 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.682282 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv"] Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.684524 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.797922 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3f99f04-6cf9-47aa-a9ec-ee23ffb3d52a-monitoring-plugin-cert\") pod \"monitoring-plugin-bdb9d8cb6-4rwwv\" (UID: \"c3f99f04-6cf9-47aa-a9ec-ee23ffb3d52a\") " pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.900983 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3f99f04-6cf9-47aa-a9ec-ee23ffb3d52a-monitoring-plugin-cert\") pod \"monitoring-plugin-bdb9d8cb6-4rwwv\" (UID: \"c3f99f04-6cf9-47aa-a9ec-ee23ffb3d52a\") " pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.903415 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 24 11:21:29 crc kubenswrapper[4678]: I1124 11:21:29.942177 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c3f99f04-6cf9-47aa-a9ec-ee23ffb3d52a-monitoring-plugin-cert\") pod \"monitoring-plugin-bdb9d8cb6-4rwwv\" (UID: \"c3f99f04-6cf9-47aa-a9ec-ee23ffb3d52a\") " pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.013096 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.024856 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.181556 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87676557-vrss6" event={"ID":"ee214084-e367-44d9-ad83-ba4f9297a829","Type":"ContainerStarted","Data":"dd9844ca08edfa6e9d5bd12c63916e48999243500acfea01d0192d6360369252"} Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.258048 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7dbdb644bf-mkmpq"] Nov 24 11:21:30 crc kubenswrapper[4678]: W1124 11:21:30.269573 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccfeaa51_b66a_475f_9dae_985e6ab48407.slice/crio-96887e8d3f6571cae642ef82a694a8b5ff031ec2f0d7a3313d5098e287dcf5b6 WatchSource:0}: Error finding container 96887e8d3f6571cae642ef82a694a8b5ff031ec2f0d7a3313d5098e287dcf5b6: Status 404 returned error can't find the container with id 96887e8d3f6571cae642ef82a694a8b5ff031ec2f0d7a3313d5098e287dcf5b6 Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.307896 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5bf474f96b-4ntw2"] Nov 24 11:21:30 crc kubenswrapper[4678]: W1124 11:21:30.336767 4678 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode531b581_8c14_4788_a28e_e08c82d9ee5d.slice/crio-178a5967c207a6790471447021a08c3dad04ca32ff1d379da843e01c485025a4 WatchSource:0}: Error finding container 178a5967c207a6790471447021a08c3dad04ca32ff1d379da843e01c485025a4: Status 404 returned error can't find the container with id 178a5967c207a6790471447021a08c3dad04ca32ff1d379da843e01c485025a4 Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.341080 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.352464 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.368832 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.369128 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.369211 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.369635 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.373256 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.378867 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-vq9f2" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.378985 4678 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.379139 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.378867 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.379640 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.384852 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-75724u70h2c0o" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.384990 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.413615 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.413900 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.528955 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-config\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529016 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529054 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdp6\" (UniqueName: \"kubernetes.io/projected/8307fabe-610d-451b-86a0-8a5577f3b520-kube-api-access-2rdp6\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529109 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8307fabe-610d-451b-86a0-8a5577f3b520-config-out\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529148 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529189 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529232 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529261 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529290 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8307fabe-610d-451b-86a0-8a5577f3b520-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529314 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529337 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529366 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529405 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529428 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529463 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529494 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-web-config\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " 
pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529528 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.529566 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.552872 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv"] Nov 24 11:21:30 crc kubenswrapper[4678]: W1124 11:21:30.572814 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3f99f04_6cf9_47aa_a9ec_ee23ffb3d52a.slice/crio-81a83553435b63d24aafaffa1a78525ed6187667338f3d906718923eba0bcfc1 WatchSource:0}: Error finding container 81a83553435b63d24aafaffa1a78525ed6187667338f3d906718923eba0bcfc1: Status 404 returned error can't find the container with id 81a83553435b63d24aafaffa1a78525ed6187667338f3d906718923eba0bcfc1 Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630611 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8307fabe-610d-451b-86a0-8a5577f3b520-config-out\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: 
I1124 11:21:30.630695 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630727 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630758 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630780 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630800 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8307fabe-610d-451b-86a0-8a5577f3b520-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 
11:21:30.630816 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630832 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630852 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630874 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630891 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 
11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630942 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630970 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-web-config\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.630994 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.631018 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.631037 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-config\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.631054 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.631078 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdp6\" (UniqueName: \"kubernetes.io/projected/8307fabe-610d-451b-86a0-8a5577f3b520-kube-api-access-2rdp6\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.631610 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.634194 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.634903 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.635730 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.636941 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.638872 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.639845 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8307fabe-610d-451b-86a0-8a5577f3b520-config-out\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.640162 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.640922 4678 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8307fabe-610d-451b-86a0-8a5577f3b520-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.645736 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.646143 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-config\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.646268 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-web-config\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.646315 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.648129 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-metrics-client-certs\") pod 
\"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.649522 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdp6\" (UniqueName: \"kubernetes.io/projected/8307fabe-610d-451b-86a0-8a5577f3b520-kube-api-access-2rdp6\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.649864 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8307fabe-610d-451b-86a0-8a5577f3b520-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.650878 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.652075 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8307fabe-610d-451b-86a0-8a5577f3b520-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8307fabe-610d-451b-86a0-8a5577f3b520\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:30 crc kubenswrapper[4678]: I1124 11:21:30.724340 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.170818 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.191647 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dbdb644bf-mkmpq" event={"ID":"ccfeaa51-b66a-475f-9dae-985e6ab48407","Type":"ContainerStarted","Data":"b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205"} Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.191766 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dbdb644bf-mkmpq" event={"ID":"ccfeaa51-b66a-475f-9dae-985e6ab48407","Type":"ContainerStarted","Data":"96887e8d3f6571cae642ef82a694a8b5ff031ec2f0d7a3313d5098e287dcf5b6"} Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.194608 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8307fabe-610d-451b-86a0-8a5577f3b520","Type":"ContainerStarted","Data":"20d9672ad72cb26887871e1ee4041fb5691c1d3f12bf1114101de4e2f870ae2b"} Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.196334 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" event={"ID":"e531b581-8c14-4788-a28e-e08c82d9ee5d","Type":"ContainerStarted","Data":"178a5967c207a6790471447021a08c3dad04ca32ff1d379da843e01c485025a4"} Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.199447 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" event={"ID":"c3f99f04-6cf9-47aa-a9ec-ee23ffb3d52a","Type":"ContainerStarted","Data":"81a83553435b63d24aafaffa1a78525ed6187667338f3d906718923eba0bcfc1"} Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.203091 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-87676557-vrss6" event={"ID":"ee214084-e367-44d9-ad83-ba4f9297a829","Type":"ContainerStarted","Data":"5da57a6605514a4c9e7eecd3686a6a92f3a784ec243cfa571488de31baafe01c"} Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.203161 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87676557-vrss6" event={"ID":"ee214084-e367-44d9-ad83-ba4f9297a829","Type":"ContainerStarted","Data":"264e657ea275e4525b8361cd00f278bd4e2f44b6a44710e58dbbf36cea9be22c"} Nov 24 11:21:31 crc kubenswrapper[4678]: I1124 11:21:31.218387 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7dbdb644bf-mkmpq" podStartSLOduration=3.21832035 podStartE2EDuration="3.21832035s" podCreationTimestamp="2025-11-24 11:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:21:31.210323525 +0000 UTC m=+302.141383184" watchObservedRunningTime="2025-11-24 11:21:31.21832035 +0000 UTC m=+302.149380029" Nov 24 11:21:32 crc kubenswrapper[4678]: I1124 11:21:32.212018 4678 generic.go:334] "Generic (PLEG): container finished" podID="8307fabe-610d-451b-86a0-8a5577f3b520" containerID="c455656d63855f58a191d6fcf1952d4c024ab38beb9562dd8646badeda94ad48" exitCode=0 Nov 24 11:21:32 crc kubenswrapper[4678]: I1124 11:21:32.212084 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8307fabe-610d-451b-86a0-8a5577f3b520","Type":"ContainerDied","Data":"c455656d63855f58a191d6fcf1952d4c024ab38beb9562dd8646badeda94ad48"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.233699 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87676557-vrss6" event={"ID":"ee214084-e367-44d9-ad83-ba4f9297a829","Type":"ContainerStarted","Data":"5be65962b13273fcc6b9ac2900b97bf020571fb088c90223922e5609dfc9c9cc"} 
Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.234572 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.234586 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87676557-vrss6" event={"ID":"ee214084-e367-44d9-ad83-ba4f9297a829","Type":"ContainerStarted","Data":"fd7f6b2443c0bff79931feb871a4176fbab6f44c0edab0c6fbf998af452485ba"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.234597 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-87676557-vrss6" event={"ID":"ee214084-e367-44d9-ad83-ba4f9297a829","Type":"ContainerStarted","Data":"a17619a4df945843064227b63db7c5beedd7b2675a445276080c4bbf2bf9484d"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.238054 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7b4e21fd-069d-4684-aa2b-e47f75ec335b","Type":"ContainerStarted","Data":"f8d17fc31c475f28607ea5427e1f8e7a57b674e8bc3e5cc2999448c830831ee4"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.238116 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7b4e21fd-069d-4684-aa2b-e47f75ec335b","Type":"ContainerStarted","Data":"c2e8adc3ef5b58ed24e8863d925fd96b2e86ee69c795ca22f0013260e699a8d5"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.238136 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7b4e21fd-069d-4684-aa2b-e47f75ec335b","Type":"ContainerStarted","Data":"838b774185e0c74d310a42edfd708b31b8c01cf800ef784dd2fa8a94fdd16116"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.238147 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"7b4e21fd-069d-4684-aa2b-e47f75ec335b","Type":"ContainerStarted","Data":"f075c29e32c76b91a15cf91c333b1b6a1d45baf8815dbcec3304c5a154990549"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.238159 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"7b4e21fd-069d-4684-aa2b-e47f75ec335b","Type":"ContainerStarted","Data":"ce9be675d186877187695071d46d3513aa425282cef0a4e2f183f147075671b5"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.239800 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" event={"ID":"e531b581-8c14-4788-a28e-e08c82d9ee5d","Type":"ContainerStarted","Data":"f33c33c1272b36cdbe1af8732d78bbb0757eda9805fe89a1c6bc5a98b4b8dd00"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.241484 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" event={"ID":"c3f99f04-6cf9-47aa-a9ec-ee23ffb3d52a","Type":"ContainerStarted","Data":"39feb5674ff5181e23f1772ca04210f3aad615189b4be16e242525b723a69d34"} Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.241898 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.249765 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.273191 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-87676557-vrss6" podStartSLOduration=2.473269739 podStartE2EDuration="9.273128024s" podCreationTimestamp="2025-11-24 11:21:25 +0000 UTC" firstStartedPulling="2025-11-24 11:21:26.520273686 +0000 UTC m=+297.451333325" lastFinishedPulling="2025-11-24 11:21:33.320131971 +0000 UTC m=+304.251191610" 
observedRunningTime="2025-11-24 11:21:34.265394756 +0000 UTC m=+305.196454405" watchObservedRunningTime="2025-11-24 11:21:34.273128024 +0000 UTC m=+305.204187703" Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.300415 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" podStartSLOduration=2.218896228 podStartE2EDuration="5.300379326s" podCreationTimestamp="2025-11-24 11:21:29 +0000 UTC" firstStartedPulling="2025-11-24 11:21:30.347719153 +0000 UTC m=+301.278778792" lastFinishedPulling="2025-11-24 11:21:33.429202251 +0000 UTC m=+304.360261890" observedRunningTime="2025-11-24 11:21:34.297011827 +0000 UTC m=+305.228071486" watchObservedRunningTime="2025-11-24 11:21:34.300379326 +0000 UTC m=+305.231438965" Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.314706 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-bdb9d8cb6-4rwwv" podStartSLOduration=2.44011802 podStartE2EDuration="5.314654357s" podCreationTimestamp="2025-11-24 11:21:29 +0000 UTC" firstStartedPulling="2025-11-24 11:21:30.57591694 +0000 UTC m=+301.506976579" lastFinishedPulling="2025-11-24 11:21:33.450453277 +0000 UTC m=+304.381512916" observedRunningTime="2025-11-24 11:21:34.313322268 +0000 UTC m=+305.244381927" watchObservedRunningTime="2025-11-24 11:21:34.314654357 +0000 UTC m=+305.245713996" Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.677524 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5dt65" Nov 24 11:21:34 crc kubenswrapper[4678]: I1124 11:21:34.740158 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcwcn"] Nov 24 11:21:35 crc kubenswrapper[4678]: I1124 11:21:35.255289 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"7b4e21fd-069d-4684-aa2b-e47f75ec335b","Type":"ContainerStarted","Data":"680916b4b3bc6bbd1c3cac26fd37f44a43d6901424ad4b59ee0fb2b873acbef8"} Nov 24 11:21:35 crc kubenswrapper[4678]: I1124 11:21:35.269974 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-87676557-vrss6" Nov 24 11:21:35 crc kubenswrapper[4678]: I1124 11:21:35.303811 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.844572889 podStartE2EDuration="10.303787403s" podCreationTimestamp="2025-11-24 11:21:25 +0000 UTC" firstStartedPulling="2025-11-24 11:21:25.862066721 +0000 UTC m=+296.793126360" lastFinishedPulling="2025-11-24 11:21:33.321281235 +0000 UTC m=+304.252340874" observedRunningTime="2025-11-24 11:21:35.298490917 +0000 UTC m=+306.229550566" watchObservedRunningTime="2025-11-24 11:21:35.303787403 +0000 UTC m=+306.234847042" Nov 24 11:21:38 crc kubenswrapper[4678]: I1124 11:21:38.283410 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8307fabe-610d-451b-86a0-8a5577f3b520","Type":"ContainerStarted","Data":"ca49acaf4cd13c8c1658619f8d65a0a0d7a89bcd504586477eb8a5617f13dcac"} Nov 24 11:21:38 crc kubenswrapper[4678]: I1124 11:21:38.284194 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8307fabe-610d-451b-86a0-8a5577f3b520","Type":"ContainerStarted","Data":"0e1945231b23a69cc915732f223b4d4cf46f9306c44bcc0561189e482bc43e77"} Nov 24 11:21:38 crc kubenswrapper[4678]: I1124 11:21:38.284208 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8307fabe-610d-451b-86a0-8a5577f3b520","Type":"ContainerStarted","Data":"f32bdf078a0e6429b0ad8a9a5b537335bbb7e5250043753a35ef6b805f8aac3f"} Nov 24 11:21:38 crc kubenswrapper[4678]: I1124 11:21:38.284217 4678 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8307fabe-610d-451b-86a0-8a5577f3b520","Type":"ContainerStarted","Data":"16a1806868652b30eaa3c437065cd59d73d585b2cdbad062116929df065b06d2"} Nov 24 11:21:38 crc kubenswrapper[4678]: I1124 11:21:38.284231 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8307fabe-610d-451b-86a0-8a5577f3b520","Type":"ContainerStarted","Data":"9af11abee732087ae12a7d9c0e49034b2f36f5f17e8c6014101ced111e22d500"} Nov 24 11:21:38 crc kubenswrapper[4678]: I1124 11:21:38.284240 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8307fabe-610d-451b-86a0-8a5577f3b520","Type":"ContainerStarted","Data":"1f89f34adfd8f76ad71d9a276d7cf5d5a2169585f8c17e0bb198d1da4e8f6c2d"} Nov 24 11:21:39 crc kubenswrapper[4678]: I1124 11:21:39.048129 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:39 crc kubenswrapper[4678]: I1124 11:21:39.048202 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:39 crc kubenswrapper[4678]: I1124 11:21:39.056055 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:39 crc kubenswrapper[4678]: I1124 11:21:39.296518 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:21:39 crc kubenswrapper[4678]: I1124 11:21:39.347251 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.2479706329999996 podStartE2EDuration="9.347211767s" podCreationTimestamp="2025-11-24 11:21:30 +0000 UTC" firstStartedPulling="2025-11-24 11:21:32.288462202 +0000 UTC m=+303.219521841" 
lastFinishedPulling="2025-11-24 11:21:37.387703336 +0000 UTC m=+308.318762975" observedRunningTime="2025-11-24 11:21:39.336302186 +0000 UTC m=+310.267361905" watchObservedRunningTime="2025-11-24 11:21:39.347211767 +0000 UTC m=+310.278271446" Nov 24 11:21:39 crc kubenswrapper[4678]: I1124 11:21:39.394282 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-chw9t"] Nov 24 11:21:40 crc kubenswrapper[4678]: I1124 11:21:40.724905 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:21:49 crc kubenswrapper[4678]: I1124 11:21:49.568523 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:49 crc kubenswrapper[4678]: I1124 11:21:49.569520 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:21:59 crc kubenswrapper[4678]: I1124 11:21:59.790644 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" podUID="5c1ade65-11e8-4529-9885-7630968a4b98" containerName="registry" containerID="cri-o://15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0" gracePeriod=30 Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.231443 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.297542 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.297658 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.305964 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-bound-sa-token\") pod \"5c1ade65-11e8-4529-9885-7630968a4b98\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.306029 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-registry-certificates\") pod \"5c1ade65-11e8-4529-9885-7630968a4b98\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.306113 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c1ade65-11e8-4529-9885-7630968a4b98-installation-pull-secrets\") pod \"5c1ade65-11e8-4529-9885-7630968a4b98\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.306165 4678 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c1ade65-11e8-4529-9885-7630968a4b98-ca-trust-extracted\") pod \"5c1ade65-11e8-4529-9885-7630968a4b98\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.308505 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5c1ade65-11e8-4529-9885-7630968a4b98\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.307799 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5c1ade65-11e8-4529-9885-7630968a4b98" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.309266 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-registry-tls\") pod \"5c1ade65-11e8-4529-9885-7630968a4b98\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.309508 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl79j\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-kube-api-access-rl79j\") pod \"5c1ade65-11e8-4529-9885-7630968a4b98\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.309626 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-trusted-ca\") pod \"5c1ade65-11e8-4529-9885-7630968a4b98\" (UID: \"5c1ade65-11e8-4529-9885-7630968a4b98\") " Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.310392 4678 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.311676 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5c1ade65-11e8-4529-9885-7630968a4b98" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.315219 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5c1ade65-11e8-4529-9885-7630968a4b98" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.316060 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1ade65-11e8-4529-9885-7630968a4b98-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5c1ade65-11e8-4529-9885-7630968a4b98" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.321499 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5c1ade65-11e8-4529-9885-7630968a4b98" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.322213 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-kube-api-access-rl79j" (OuterVolumeSpecName: "kube-api-access-rl79j") pod "5c1ade65-11e8-4529-9885-7630968a4b98" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98"). InnerVolumeSpecName "kube-api-access-rl79j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.323826 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5c1ade65-11e8-4529-9885-7630968a4b98" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.323942 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1ade65-11e8-4529-9885-7630968a4b98-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5c1ade65-11e8-4529-9885-7630968a4b98" (UID: "5c1ade65-11e8-4529-9885-7630968a4b98"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.412176 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c1ade65-11e8-4529-9885-7630968a4b98-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.412235 4678 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.412259 4678 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c1ade65-11e8-4529-9885-7630968a4b98-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.412278 4678 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/5c1ade65-11e8-4529-9885-7630968a4b98-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.412297 4678 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.412319 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl79j\" (UniqueName: \"kubernetes.io/projected/5c1ade65-11e8-4529-9885-7630968a4b98-kube-api-access-rl79j\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.472532 4678 generic.go:334] "Generic (PLEG): container finished" podID="5c1ade65-11e8-4529-9885-7630968a4b98" containerID="15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0" exitCode=0 Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.472613 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" event={"ID":"5c1ade65-11e8-4529-9885-7630968a4b98","Type":"ContainerDied","Data":"15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0"} Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.472637 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.472664 4678 scope.go:117] "RemoveContainer" containerID="15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.472649 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vcwcn" event={"ID":"5c1ade65-11e8-4529-9885-7630968a4b98","Type":"ContainerDied","Data":"a87560927fc8a854653bcf63cba96657e23cc3e7ae34b788013651b7de0f51c3"} Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.502729 4678 scope.go:117] "RemoveContainer" containerID="15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0" Nov 24 11:22:00 crc kubenswrapper[4678]: E1124 11:22:00.503365 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0\": container with ID starting with 15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0 not found: ID does not exist" containerID="15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.503442 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0"} err="failed to get container status \"15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0\": rpc error: code = NotFound desc = could not find container \"15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0\": container with ID starting with 15397c68f5c5398ea2e1cd72a4edfbeded64269ddda27c00d35284d9316275c0 not found: ID does not exist" Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.515079 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-vcwcn"] Nov 24 11:22:00 crc kubenswrapper[4678]: I1124 11:22:00.521901 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcwcn"] Nov 24 11:22:01 crc kubenswrapper[4678]: I1124 11:22:01.905926 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c1ade65-11e8-4529-9885-7630968a4b98" path="/var/lib/kubelet/pods/5c1ade65-11e8-4529-9885-7630968a4b98/volumes" Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.451599 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-chw9t" podUID="38101ae8-9e21-4a62-b839-cc42e0562769" containerName="console" containerID="cri-o://138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe" gracePeriod=15 Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.944192 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-chw9t_38101ae8-9e21-4a62-b839-cc42e0562769/console/0.log" Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.944760 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.999128 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-oauth-serving-cert\") pod \"38101ae8-9e21-4a62-b839-cc42e0562769\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.999173 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c75wp\" (UniqueName: \"kubernetes.io/projected/38101ae8-9e21-4a62-b839-cc42e0562769-kube-api-access-c75wp\") pod \"38101ae8-9e21-4a62-b839-cc42e0562769\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.999220 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-service-ca\") pod \"38101ae8-9e21-4a62-b839-cc42e0562769\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.999270 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-trusted-ca-bundle\") pod \"38101ae8-9e21-4a62-b839-cc42e0562769\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.999297 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-oauth-config\") pod \"38101ae8-9e21-4a62-b839-cc42e0562769\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.999355 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-serving-cert\") pod \"38101ae8-9e21-4a62-b839-cc42e0562769\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " Nov 24 11:22:04 crc kubenswrapper[4678]: I1124 11:22:04.999405 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-console-config\") pod \"38101ae8-9e21-4a62-b839-cc42e0562769\" (UID: \"38101ae8-9e21-4a62-b839-cc42e0562769\") " Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.000090 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "38101ae8-9e21-4a62-b839-cc42e0562769" (UID: "38101ae8-9e21-4a62-b839-cc42e0562769"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.000707 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-console-config" (OuterVolumeSpecName: "console-config") pod "38101ae8-9e21-4a62-b839-cc42e0562769" (UID: "38101ae8-9e21-4a62-b839-cc42e0562769"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.002575 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-service-ca" (OuterVolumeSpecName: "service-ca") pod "38101ae8-9e21-4a62-b839-cc42e0562769" (UID: "38101ae8-9e21-4a62-b839-cc42e0562769"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.003415 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "38101ae8-9e21-4a62-b839-cc42e0562769" (UID: "38101ae8-9e21-4a62-b839-cc42e0562769"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.010405 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "38101ae8-9e21-4a62-b839-cc42e0562769" (UID: "38101ae8-9e21-4a62-b839-cc42e0562769"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.012122 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "38101ae8-9e21-4a62-b839-cc42e0562769" (UID: "38101ae8-9e21-4a62-b839-cc42e0562769"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.013801 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38101ae8-9e21-4a62-b839-cc42e0562769-kube-api-access-c75wp" (OuterVolumeSpecName: "kube-api-access-c75wp") pod "38101ae8-9e21-4a62-b839-cc42e0562769" (UID: "38101ae8-9e21-4a62-b839-cc42e0562769"). InnerVolumeSpecName "kube-api-access-c75wp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.101574 4678 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.101617 4678 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.101629 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c75wp\" (UniqueName: \"kubernetes.io/projected/38101ae8-9e21-4a62-b839-cc42e0562769-kube-api-access-c75wp\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.101652 4678 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.101665 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38101ae8-9e21-4a62-b839-cc42e0562769-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.101697 4678 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.101709 4678 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/38101ae8-9e21-4a62-b839-cc42e0562769-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:22:05 crc 
kubenswrapper[4678]: I1124 11:22:05.512503 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-chw9t_38101ae8-9e21-4a62-b839-cc42e0562769/console/0.log" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.513133 4678 generic.go:334] "Generic (PLEG): container finished" podID="38101ae8-9e21-4a62-b839-cc42e0562769" containerID="138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe" exitCode=2 Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.513190 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-chw9t" event={"ID":"38101ae8-9e21-4a62-b839-cc42e0562769","Type":"ContainerDied","Data":"138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe"} Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.513248 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-chw9t" event={"ID":"38101ae8-9e21-4a62-b839-cc42e0562769","Type":"ContainerDied","Data":"0e8e3fc47eb350b153d883c87c4ba354dbb2ff870269e5049d32cc6a4f857ee8"} Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.513279 4678 scope.go:117] "RemoveContainer" containerID="138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.513284 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-chw9t" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.543398 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-chw9t"] Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.545538 4678 scope.go:117] "RemoveContainer" containerID="138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe" Nov 24 11:22:05 crc kubenswrapper[4678]: E1124 11:22:05.546115 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe\": container with ID starting with 138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe not found: ID does not exist" containerID="138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.546195 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe"} err="failed to get container status \"138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe\": rpc error: code = NotFound desc = could not find container \"138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe\": container with ID starting with 138260386cc840cb703f878bfa5634564534899ce2f347157ea66e9b1af25ebe not found: ID does not exist" Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.546637 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-chw9t"] Nov 24 11:22:05 crc kubenswrapper[4678]: I1124 11:22:05.910890 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38101ae8-9e21-4a62-b839-cc42e0562769" path="/var/lib/kubelet/pods/38101ae8-9e21-4a62-b839-cc42e0562769/volumes" Nov 24 11:22:09 crc kubenswrapper[4678]: I1124 11:22:09.575121 4678 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:22:09 crc kubenswrapper[4678]: I1124 11:22:09.581306 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5bf474f96b-4ntw2" Nov 24 11:22:30 crc kubenswrapper[4678]: I1124 11:22:30.297496 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:22:30 crc kubenswrapper[4678]: I1124 11:22:30.298467 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:22:30 crc kubenswrapper[4678]: I1124 11:22:30.725371 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:22:30 crc kubenswrapper[4678]: I1124 11:22:30.774049 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:22:31 crc kubenswrapper[4678]: I1124 11:22:31.761156 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 11:23:00 crc kubenswrapper[4678]: I1124 11:23:00.297354 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:23:00 crc kubenswrapper[4678]: I1124 11:23:00.297908 
4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:23:00 crc kubenswrapper[4678]: I1124 11:23:00.297958 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:23:00 crc kubenswrapper[4678]: I1124 11:23:00.298444 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71975c2ba1a669dde4cf0c96567433189448d817b027616751a53013ba5e4709"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:23:00 crc kubenswrapper[4678]: I1124 11:23:00.298515 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://71975c2ba1a669dde4cf0c96567433189448d817b027616751a53013ba5e4709" gracePeriod=600 Nov 24 11:23:00 crc kubenswrapper[4678]: I1124 11:23:00.922613 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="71975c2ba1a669dde4cf0c96567433189448d817b027616751a53013ba5e4709" exitCode=0 Nov 24 11:23:00 crc kubenswrapper[4678]: I1124 11:23:00.922716 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"71975c2ba1a669dde4cf0c96567433189448d817b027616751a53013ba5e4709"} Nov 24 11:23:00 crc kubenswrapper[4678]: 
I1124 11:23:00.923198 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"538be58fbebd66fe558f9e6e8bc6084171acfd8da3f2cb10d27be45e829cefaa"} Nov 24 11:23:00 crc kubenswrapper[4678]: I1124 11:23:00.923259 4678 scope.go:117] "RemoveContainer" containerID="251bb34d42f047f1b1f5c15691c174abe2436cc2b4f0e8e9e0eaf8ac321497c6" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.019996 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-9c8475f4f-bf2zx"] Nov 24 11:23:02 crc kubenswrapper[4678]: E1124 11:23:02.020642 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38101ae8-9e21-4a62-b839-cc42e0562769" containerName="console" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.020658 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="38101ae8-9e21-4a62-b839-cc42e0562769" containerName="console" Nov 24 11:23:02 crc kubenswrapper[4678]: E1124 11:23:02.020687 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1ade65-11e8-4529-9885-7630968a4b98" containerName="registry" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.020695 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1ade65-11e8-4529-9885-7630968a4b98" containerName="registry" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.020833 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="38101ae8-9e21-4a62-b839-cc42e0562769" containerName="console" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.020847 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1ade65-11e8-4529-9885-7630968a4b98" containerName="registry" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.021309 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.029642 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-serving-cert\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.029944 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-oauth-config\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.030145 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-oauth-serving-cert\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.030289 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-trusted-ca-bundle\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.030372 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-config\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.030463 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmk6q\" (UniqueName: \"kubernetes.io/projected/5e7b135b-2235-4b47-b8f5-a44f4c91a099-kube-api-access-jmk6q\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.030609 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-service-ca\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.045227 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9c8475f4f-bf2zx"] Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.132370 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-oauth-serving-cert\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.132451 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-trusted-ca-bundle\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 
crc kubenswrapper[4678]: I1124 11:23:02.132472 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-config\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.132492 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmk6q\" (UniqueName: \"kubernetes.io/projected/5e7b135b-2235-4b47-b8f5-a44f4c91a099-kube-api-access-jmk6q\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.132531 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-service-ca\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.132557 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-serving-cert\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.132577 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-oauth-config\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 
11:23:02.134210 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-service-ca\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.134209 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-config\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.134509 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-oauth-serving-cert\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.135038 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-trusted-ca-bundle\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.141621 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-oauth-config\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.142720 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-serving-cert\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.158758 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmk6q\" (UniqueName: \"kubernetes.io/projected/5e7b135b-2235-4b47-b8f5-a44f4c91a099-kube-api-access-jmk6q\") pod \"console-9c8475f4f-bf2zx\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.344516 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.553843 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9c8475f4f-bf2zx"] Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.942207 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9c8475f4f-bf2zx" event={"ID":"5e7b135b-2235-4b47-b8f5-a44f4c91a099","Type":"ContainerStarted","Data":"2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004"} Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.942594 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9c8475f4f-bf2zx" event={"ID":"5e7b135b-2235-4b47-b8f5-a44f4c91a099","Type":"ContainerStarted","Data":"a45d68f1c245ae6870cfaa00309116cd9c0d92157cf2f31b786e0265331fb1d9"} Nov 24 11:23:02 crc kubenswrapper[4678]: I1124 11:23:02.961909 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-9c8475f4f-bf2zx" podStartSLOduration=0.961889104 podStartE2EDuration="961.889104ms" podCreationTimestamp="2025-11-24 11:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:23:02.960121663 +0000 UTC m=+393.891181322" watchObservedRunningTime="2025-11-24 11:23:02.961889104 +0000 UTC m=+393.892948753" Nov 24 11:23:12 crc kubenswrapper[4678]: I1124 11:23:12.345068 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:12 crc kubenswrapper[4678]: I1124 11:23:12.345879 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:12 crc kubenswrapper[4678]: I1124 11:23:12.353072 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:13 crc kubenswrapper[4678]: I1124 11:23:13.031700 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:23:13 crc kubenswrapper[4678]: I1124 11:23:13.096514 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7dbdb644bf-mkmpq"] Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.144476 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7dbdb644bf-mkmpq" podUID="ccfeaa51-b66a-475f-9dae-985e6ab48407" containerName="console" containerID="cri-o://b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205" gracePeriod=15 Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.544768 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7dbdb644bf-mkmpq_ccfeaa51-b66a-475f-9dae-985e6ab48407/console/0.log" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.545177 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.572800 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-oauth-serving-cert\") pod \"ccfeaa51-b66a-475f-9dae-985e6ab48407\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.572939 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-config\") pod \"ccfeaa51-b66a-475f-9dae-985e6ab48407\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.572987 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-serving-cert\") pod \"ccfeaa51-b66a-475f-9dae-985e6ab48407\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.573941 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ccfeaa51-b66a-475f-9dae-985e6ab48407" (UID: "ccfeaa51-b66a-475f-9dae-985e6ab48407"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.574000 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-config" (OuterVolumeSpecName: "console-config") pod "ccfeaa51-b66a-475f-9dae-985e6ab48407" (UID: "ccfeaa51-b66a-475f-9dae-985e6ab48407"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.574496 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-oauth-config\") pod \"ccfeaa51-b66a-475f-9dae-985e6ab48407\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.574545 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-trusted-ca-bundle\") pod \"ccfeaa51-b66a-475f-9dae-985e6ab48407\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.574617 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-service-ca\") pod \"ccfeaa51-b66a-475f-9dae-985e6ab48407\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.574788 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-524bl\" (UniqueName: \"kubernetes.io/projected/ccfeaa51-b66a-475f-9dae-985e6ab48407-kube-api-access-524bl\") pod \"ccfeaa51-b66a-475f-9dae-985e6ab48407\" (UID: \"ccfeaa51-b66a-475f-9dae-985e6ab48407\") " Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.575355 4678 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.575391 4678 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.575709 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ccfeaa51-b66a-475f-9dae-985e6ab48407" (UID: "ccfeaa51-b66a-475f-9dae-985e6ab48407"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.576319 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-service-ca" (OuterVolumeSpecName: "service-ca") pod "ccfeaa51-b66a-475f-9dae-985e6ab48407" (UID: "ccfeaa51-b66a-475f-9dae-985e6ab48407"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.582918 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfeaa51-b66a-475f-9dae-985e6ab48407-kube-api-access-524bl" (OuterVolumeSpecName: "kube-api-access-524bl") pod "ccfeaa51-b66a-475f-9dae-985e6ab48407" (UID: "ccfeaa51-b66a-475f-9dae-985e6ab48407"). InnerVolumeSpecName "kube-api-access-524bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.582945 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ccfeaa51-b66a-475f-9dae-985e6ab48407" (UID: "ccfeaa51-b66a-475f-9dae-985e6ab48407"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.583574 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ccfeaa51-b66a-475f-9dae-985e6ab48407" (UID: "ccfeaa51-b66a-475f-9dae-985e6ab48407"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.677199 4678 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.677246 4678 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccfeaa51-b66a-475f-9dae-985e6ab48407-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.677257 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.677271 4678 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccfeaa51-b66a-475f-9dae-985e6ab48407-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:23:38 crc kubenswrapper[4678]: I1124 11:23:38.677283 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-524bl\" (UniqueName: \"kubernetes.io/projected/ccfeaa51-b66a-475f-9dae-985e6ab48407-kube-api-access-524bl\") on node \"crc\" DevicePath \"\"" Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.247030 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-7dbdb644bf-mkmpq_ccfeaa51-b66a-475f-9dae-985e6ab48407/console/0.log" Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.247215 4678 generic.go:334] "Generic (PLEG): container finished" podID="ccfeaa51-b66a-475f-9dae-985e6ab48407" containerID="b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205" exitCode=2 Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.247263 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dbdb644bf-mkmpq" event={"ID":"ccfeaa51-b66a-475f-9dae-985e6ab48407","Type":"ContainerDied","Data":"b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205"} Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.247275 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7dbdb644bf-mkmpq" Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.247309 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dbdb644bf-mkmpq" event={"ID":"ccfeaa51-b66a-475f-9dae-985e6ab48407","Type":"ContainerDied","Data":"96887e8d3f6571cae642ef82a694a8b5ff031ec2f0d7a3313d5098e287dcf5b6"} Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.247335 4678 scope.go:117] "RemoveContainer" containerID="b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205" Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.287778 4678 scope.go:117] "RemoveContainer" containerID="b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205" Nov 24 11:23:39 crc kubenswrapper[4678]: E1124 11:23:39.288996 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205\": container with ID starting with b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205 not found: ID does not exist" 
containerID="b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205" Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.289165 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205"} err="failed to get container status \"b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205\": rpc error: code = NotFound desc = could not find container \"b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205\": container with ID starting with b6afe0e52f729aee747b27632ba54dd158b131717ae926e9e141c60267862205 not found: ID does not exist" Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.302662 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7dbdb644bf-mkmpq"] Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.306755 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7dbdb644bf-mkmpq"] Nov 24 11:23:39 crc kubenswrapper[4678]: I1124 11:23:39.906505 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfeaa51-b66a-475f-9dae-985e6ab48407" path="/var/lib/kubelet/pods/ccfeaa51-b66a-475f-9dae-985e6ab48407/volumes" Nov 24 11:25:00 crc kubenswrapper[4678]: I1124 11:25:00.296538 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:25:00 crc kubenswrapper[4678]: I1124 11:25:00.297076 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 11:25:30 crc kubenswrapper[4678]: I1124 11:25:30.297450 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:25:30 crc kubenswrapper[4678]: I1124 11:25:30.298044 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:25:57 crc kubenswrapper[4678]: I1124 11:25:57.928849 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk"] Nov 24 11:25:57 crc kubenswrapper[4678]: E1124 11:25:57.929728 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfeaa51-b66a-475f-9dae-985e6ab48407" containerName="console" Nov 24 11:25:57 crc kubenswrapper[4678]: I1124 11:25:57.929747 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfeaa51-b66a-475f-9dae-985e6ab48407" containerName="console" Nov 24 11:25:57 crc kubenswrapper[4678]: I1124 11:25:57.929882 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfeaa51-b66a-475f-9dae-985e6ab48407" containerName="console" Nov 24 11:25:57 crc kubenswrapper[4678]: I1124 11:25:57.930865 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:57 crc kubenswrapper[4678]: I1124 11:25:57.933072 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:25:57 crc kubenswrapper[4678]: I1124 11:25:57.939355 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk"] Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.084415 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.084748 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks722\" (UniqueName: \"kubernetes.io/projected/4e0a12a4-1d26-4559-857f-6b9d4a76924d-kube-api-access-ks722\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.084775 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: 
I1124 11:25:58.186129 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.186213 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks722\" (UniqueName: \"kubernetes.io/projected/4e0a12a4-1d26-4559-857f-6b9d4a76924d-kube-api-access-ks722\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.186635 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.186732 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.186938 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.205661 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks722\" (UniqueName: \"kubernetes.io/projected/4e0a12a4-1d26-4559-857f-6b9d4a76924d-kube-api-access-ks722\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.293838 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.487158 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk"] Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.785068 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" event={"ID":"4e0a12a4-1d26-4559-857f-6b9d4a76924d","Type":"ContainerStarted","Data":"de347c6ac18d0fad37aa32e9d3c639cf27679287e811339ba5e8e8c463e588da"} Nov 24 11:25:58 crc kubenswrapper[4678]: I1124 11:25:58.785513 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" event={"ID":"4e0a12a4-1d26-4559-857f-6b9d4a76924d","Type":"ContainerStarted","Data":"55213ccd4ca72d1b6ea728e1c53d3daafc3f560f7a656c961fed886fa1ac639d"} Nov 24 11:25:59 crc kubenswrapper[4678]: I1124 11:25:59.793524 4678 
generic.go:334] "Generic (PLEG): container finished" podID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerID="de347c6ac18d0fad37aa32e9d3c639cf27679287e811339ba5e8e8c463e588da" exitCode=0 Nov 24 11:25:59 crc kubenswrapper[4678]: I1124 11:25:59.793570 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" event={"ID":"4e0a12a4-1d26-4559-857f-6b9d4a76924d","Type":"ContainerDied","Data":"de347c6ac18d0fad37aa32e9d3c639cf27679287e811339ba5e8e8c463e588da"} Nov 24 11:25:59 crc kubenswrapper[4678]: I1124 11:25:59.795860 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.296913 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.297000 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.297071 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.297865 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"538be58fbebd66fe558f9e6e8bc6084171acfd8da3f2cb10d27be45e829cefaa"} 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.297934 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://538be58fbebd66fe558f9e6e8bc6084171acfd8da3f2cb10d27be45e829cefaa" gracePeriod=600 Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.804978 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="538be58fbebd66fe558f9e6e8bc6084171acfd8da3f2cb10d27be45e829cefaa" exitCode=0 Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.805165 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"538be58fbebd66fe558f9e6e8bc6084171acfd8da3f2cb10d27be45e829cefaa"} Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.805488 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"2bfe74ad72b1070a6c7e462d710c234790fcd2a6fff50a06b17d2f1671decd08"} Nov 24 11:26:00 crc kubenswrapper[4678]: I1124 11:26:00.805514 4678 scope.go:117] "RemoveContainer" containerID="71975c2ba1a669dde4cf0c96567433189448d817b027616751a53013ba5e4709" Nov 24 11:26:01 crc kubenswrapper[4678]: I1124 11:26:01.813142 4678 generic.go:334] "Generic (PLEG): container finished" podID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerID="1741da27b949b54f3b6a4f99faf1a6b17275fd7b82c0efa8703eecc84ed9d7be" exitCode=0 Nov 24 11:26:01 crc kubenswrapper[4678]: I1124 11:26:01.813254 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" event={"ID":"4e0a12a4-1d26-4559-857f-6b9d4a76924d","Type":"ContainerDied","Data":"1741da27b949b54f3b6a4f99faf1a6b17275fd7b82c0efa8703eecc84ed9d7be"} Nov 24 11:26:02 crc kubenswrapper[4678]: I1124 11:26:02.830254 4678 generic.go:334] "Generic (PLEG): container finished" podID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerID="c4aa8d8e41c1c68d6ada8d81a4610d7063b4d04a70706d7330ab266d53c47dda" exitCode=0 Nov 24 11:26:02 crc kubenswrapper[4678]: I1124 11:26:02.830590 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" event={"ID":"4e0a12a4-1d26-4559-857f-6b9d4a76924d","Type":"ContainerDied","Data":"c4aa8d8e41c1c68d6ada8d81a4610d7063b4d04a70706d7330ab266d53c47dda"} Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.084661 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.198055 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-util\") pod \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.198121 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-bundle\") pod \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.198153 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks722\" (UniqueName: \"kubernetes.io/projected/4e0a12a4-1d26-4559-857f-6b9d4a76924d-kube-api-access-ks722\") pod \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\" (UID: \"4e0a12a4-1d26-4559-857f-6b9d4a76924d\") " Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.201615 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-bundle" (OuterVolumeSpecName: "bundle") pod "4e0a12a4-1d26-4559-857f-6b9d4a76924d" (UID: "4e0a12a4-1d26-4559-857f-6b9d4a76924d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.204304 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0a12a4-1d26-4559-857f-6b9d4a76924d-kube-api-access-ks722" (OuterVolumeSpecName: "kube-api-access-ks722") pod "4e0a12a4-1d26-4559-857f-6b9d4a76924d" (UID: "4e0a12a4-1d26-4559-857f-6b9d4a76924d"). InnerVolumeSpecName "kube-api-access-ks722". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.212597 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-util" (OuterVolumeSpecName: "util") pod "4e0a12a4-1d26-4559-857f-6b9d4a76924d" (UID: "4e0a12a4-1d26-4559-857f-6b9d4a76924d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.300035 4678 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.300450 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks722\" (UniqueName: \"kubernetes.io/projected/4e0a12a4-1d26-4559-857f-6b9d4a76924d-kube-api-access-ks722\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.300589 4678 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e0a12a4-1d26-4559-857f-6b9d4a76924d-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.849436 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" event={"ID":"4e0a12a4-1d26-4559-857f-6b9d4a76924d","Type":"ContainerDied","Data":"55213ccd4ca72d1b6ea728e1c53d3daafc3f560f7a656c961fed886fa1ac639d"} Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.850192 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55213ccd4ca72d1b6ea728e1c53d3daafc3f560f7a656c961fed886fa1ac639d" Nov 24 11:26:04 crc kubenswrapper[4678]: I1124 11:26:04.850330 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk" Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.390185 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zsq5s"] Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.391474 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovn-controller" containerID="cri-o://aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631" gracePeriod=30 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.391571 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="nbdb" containerID="cri-o://ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0" gracePeriod=30 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.391621 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6" gracePeriod=30 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.391746 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="sbdb" containerID="cri-o://acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0" gracePeriod=30 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.391631 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" 
containerName="kube-rbac-proxy-node" containerID="cri-o://c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859" gracePeriod=30 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.391843 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="northd" containerID="cri-o://634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9" gracePeriod=30 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.391642 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovn-acl-logging" containerID="cri-o://09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6" gracePeriod=30 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.430413 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" containerID="cri-o://0a974bbe7632470d424b26235d56421761fefeb71b2355e01b646decde9d5693" gracePeriod=30 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.889613 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovnkube-controller/3.log" Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.892317 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovn-acl-logging/0.log" Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.893899 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovn-controller/0.log" Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894249 4678 
generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="0a974bbe7632470d424b26235d56421761fefeb71b2355e01b646decde9d5693" exitCode=0 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894281 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0" exitCode=0 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894290 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0" exitCode=0 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894298 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9" exitCode=0 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894305 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6" exitCode=143 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894313 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631" exitCode=143 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894353 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"0a974bbe7632470d424b26235d56421761fefeb71b2355e01b646decde9d5693"} Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894390 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" 
event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0"} Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894403 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0"} Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894412 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9"} Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894422 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6"} Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894431 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631"} Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.894454 4678 scope.go:117] "RemoveContainer" containerID="ad9c48bf3a6894e720079c99a52f36875d315168923fcdfc0af5b71e0fe35938" Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.914882 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/2.log" Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.916987 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/1.log" Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.917037 4678 generic.go:334] "Generic (PLEG): container finished" podID="f159c812-75d9-4ad6-9e20-4d208ffe42fb" containerID="8bab327ee33ef6b6764f09a9c29750d42a06fb26d0580431da74c25580a9d952" exitCode=2 Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.917079 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h24xv" event={"ID":"f159c812-75d9-4ad6-9e20-4d208ffe42fb","Type":"ContainerDied","Data":"8bab327ee33ef6b6764f09a9c29750d42a06fb26d0580431da74c25580a9d952"} Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.917679 4678 scope.go:117] "RemoveContainer" containerID="8bab327ee33ef6b6764f09a9c29750d42a06fb26d0580431da74c25580a9d952" Nov 24 11:26:09 crc kubenswrapper[4678]: E1124 11:26:09.917860 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h24xv_openshift-multus(f159c812-75d9-4ad6-9e20-4d208ffe42fb)\"" pod="openshift-multus/multus-h24xv" podUID="f159c812-75d9-4ad6-9e20-4d208ffe42fb" Nov 24 11:26:09 crc kubenswrapper[4678]: I1124 11:26:09.951072 4678 scope.go:117] "RemoveContainer" containerID="d533b7bca5d15993708d525de6488e5c07fddad973c2148c82257608bf32e801" Nov 24 11:26:10 crc kubenswrapper[4678]: I1124 11:26:10.928396 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovn-acl-logging/0.log" Nov 24 11:26:10 crc kubenswrapper[4678]: I1124 11:26:10.928897 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovn-controller/0.log" Nov 24 11:26:10 crc kubenswrapper[4678]: I1124 11:26:10.929559 4678 generic.go:334] "Generic (PLEG): 
container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6" exitCode=0 Nov 24 11:26:10 crc kubenswrapper[4678]: I1124 11:26:10.929586 4678 generic.go:334] "Generic (PLEG): container finished" podID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerID="c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859" exitCode=0 Nov 24 11:26:10 crc kubenswrapper[4678]: I1124 11:26:10.929642 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6"} Nov 24 11:26:10 crc kubenswrapper[4678]: I1124 11:26:10.929692 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859"} Nov 24 11:26:10 crc kubenswrapper[4678]: I1124 11:26:10.932972 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/2.log" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.090382 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovn-acl-logging/0.log" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.091096 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovn-controller/0.log" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.091635 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211104 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-openvswitch\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211201 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqfl5\" (UniqueName: \"kubernetes.io/projected/318b13d4-6c61-4b45-bb2f-0a7e243946a6-kube-api-access-vqfl5\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211246 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-log-socket\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211276 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-script-lib\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211298 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-ovn-kubernetes\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211335 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-config\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211353 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-netns\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211372 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-ovn\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211394 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-netd\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211421 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-etc-openvswitch\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211462 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211516 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-env-overrides\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211535 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-kubelet\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211587 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-bin\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211610 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-slash\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211650 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovn-node-metrics-cert\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211681 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-node-log\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211705 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-systemd-units\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211720 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-systemd\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211740 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-var-lib-openvswitch\") pod \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\" (UID: \"318b13d4-6c61-4b45-bb2f-0a7e243946a6\") " Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.211261 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212174 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212046 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212059 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212080 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-log-socket" (OuterVolumeSpecName: "log-socket") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212098 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212116 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212125 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212118 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-slash" (OuterVolumeSpecName: "host-slash") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212135 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212149 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-node-log" (OuterVolumeSpecName: "node-log") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212153 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212552 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212658 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212692 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.212713 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.220245 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318b13d4-6c61-4b45-bb2f-0a7e243946a6-kube-api-access-vqfl5" (OuterVolumeSpecName: "kube-api-access-vqfl5") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "kube-api-access-vqfl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.220408 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.220798 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.232003 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "318b13d4-6c61-4b45-bb2f-0a7e243946a6" (UID: "318b13d4-6c61-4b45-bb2f-0a7e243946a6"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266476 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6mz9z"] Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266825 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266844 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266857 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kube-rbac-proxy-node" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266864 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kube-rbac-proxy-node" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266879 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovn-acl-logging" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266886 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovn-acl-logging" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266898 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerName="extract" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266904 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerName="extract" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266917 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" 
containerName="northd" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266924 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="northd" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266939 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266945 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266953 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kubecfg-setup" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266958 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kubecfg-setup" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266968 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerName="util" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266975 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerName="util" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266983 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.266990 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.266998 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovn-controller" Nov 24 11:26:11 
crc kubenswrapper[4678]: I1124 11:26:11.267003 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovn-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.267014 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267019 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.267031 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="sbdb" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267037 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="sbdb" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.267046 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="nbdb" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267051 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="nbdb" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.267059 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerName="pull" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267064 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerName="pull" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267167 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="sbdb" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267178 4678 
memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267186 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267195 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kube-rbac-proxy-node" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267206 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267217 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="nbdb" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267227 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovn-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267240 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0a12a4-1d26-4559-857f-6b9d4a76924d" containerName="extract" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267252 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovn-acl-logging" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267265 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="northd" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267276 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: 
E1124 11:26:11.267415 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267426 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: E1124 11:26:11.267441 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267448 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267595 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.267861 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" containerName="ovnkube-controller" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.269664 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.312909 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-cni-netd\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.312969 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-systemd\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.312992 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-systemd-units\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313015 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovn-node-metrics-cert\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313168 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-etc-openvswitch\") pod \"ovnkube-node-6mz9z\" (UID: 
\"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313246 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovnkube-config\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313403 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-run-ovn-kubernetes\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313463 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovnkube-script-lib\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313528 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-var-lib-openvswitch\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313598 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-slash\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313685 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-env-overrides\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313758 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n2mr\" (UniqueName: \"kubernetes.io/projected/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-kube-api-access-2n2mr\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313855 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-run-netns\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313913 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-openvswitch\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.313976 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-log-socket\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314020 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-node-log\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314217 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-kubelet\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314271 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314289 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-cni-bin\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314330 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-ovn\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314412 4678 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314430 4678 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314448 4678 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314459 4678 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314471 4678 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314482 4678 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 
11:26:11.314494 4678 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314505 4678 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314514 4678 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314524 4678 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314534 4678 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314545 4678 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314557 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqfl5\" (UniqueName: \"kubernetes.io/projected/318b13d4-6c61-4b45-bb2f-0a7e243946a6-kube-api-access-vqfl5\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314567 4678 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314577 4678 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314588 4678 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314599 4678 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/318b13d4-6c61-4b45-bb2f-0a7e243946a6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314609 4678 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314618 4678 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.314626 4678 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/318b13d4-6c61-4b45-bb2f-0a7e243946a6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416521 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-run-ovn-kubernetes\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416593 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovnkube-script-lib\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416619 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-var-lib-openvswitch\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416646 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-slash\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416681 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-env-overrides\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416702 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n2mr\" (UniqueName: 
\"kubernetes.io/projected/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-kube-api-access-2n2mr\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416725 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-run-netns\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416742 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-openvswitch\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416764 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-log-socket\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416785 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-node-log\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416822 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-kubelet\") pod 
\"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416845 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416868 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-cni-bin\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416891 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-ovn\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416908 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-cni-netd\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416932 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-systemd\") pod \"ovnkube-node-6mz9z\" (UID: 
\"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416952 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-systemd-units\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416967 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovn-node-metrics-cert\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.416987 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-etc-openvswitch\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.417005 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovnkube-config\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.417804 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-node-log\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.417909 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-run-ovn-kubernetes\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.417948 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovnkube-config\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418024 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-kubelet\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418064 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418087 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-cni-bin\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 
11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418110 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-ovn\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418137 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-cni-netd\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418157 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-systemd\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418181 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-systemd-units\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418823 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovnkube-script-lib\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418892 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-run-netns\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418925 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-run-openvswitch\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418959 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-log-socket\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.418994 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-host-slash\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.419022 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-var-lib-openvswitch\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.419173 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-etc-openvswitch\") pod 
\"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.419428 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-env-overrides\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.421196 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-ovn-node-metrics-cert\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.440514 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n2mr\" (UniqueName: \"kubernetes.io/projected/df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0-kube-api-access-2n2mr\") pod \"ovnkube-node-6mz9z\" (UID: \"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.589093 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.943961 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovn-acl-logging/0.log" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.945290 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsq5s_318b13d4-6c61-4b45-bb2f-0a7e243946a6/ovn-controller/0.log" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.945713 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" event={"ID":"318b13d4-6c61-4b45-bb2f-0a7e243946a6","Type":"ContainerDied","Data":"5293548c0572094578227a0ec41195afe36c5f33f902c239464c1c636a22211b"} Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.945773 4678 scope.go:117] "RemoveContainer" containerID="0a974bbe7632470d424b26235d56421761fefeb71b2355e01b646decde9d5693" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.946013 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zsq5s" Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.948402 4678 generic.go:334] "Generic (PLEG): container finished" podID="df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0" containerID="0e990493d958637abdbd2e341312e462d23ba1da001b31834ba964ef286c9597" exitCode=0 Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.948474 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerDied","Data":"0e990493d958637abdbd2e341312e462d23ba1da001b31834ba964ef286c9597"} Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.948503 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"62b646ac03c63363317aca75a5bd7ef64a6133f48eb14640f7dae0339f8ab02a"} Nov 24 11:26:11 crc kubenswrapper[4678]: I1124 11:26:11.978178 4678 scope.go:117] "RemoveContainer" containerID="acd85c24ede40cf2126acdea404244e254a6a1bcdf554cf45dd23db2f54a72e0" Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.015534 4678 scope.go:117] "RemoveContainer" containerID="ca026291a78163be3c8f5f9507c5143df1df82c288adf97b685b0d6d482a97e0" Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.040004 4678 scope.go:117] "RemoveContainer" containerID="634c544553190d1be415c048aac4de7ece5643238c558445473d3b5eec2343f9" Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.057494 4678 scope.go:117] "RemoveContainer" containerID="498e7eca3868f7a972bf4b880e5da471dadefa15ebb24bbd053b4c473e12c5c6" Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.073745 4678 scope.go:117] "RemoveContainer" containerID="c5f24433fb24226fb5cf48bac66fa320e54707f1034bda0681b7bcb0b5125859" Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.091273 4678 scope.go:117] "RemoveContainer" 
containerID="09b79919781f1f967ca7578765531e312f925e258884d18a4b5f3d2f32b240a6" Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.113483 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zsq5s"] Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.117624 4678 scope.go:117] "RemoveContainer" containerID="aa15184f21f24405fafbbade2d0fab8939999a0f512ef2bdcdeaaea436c49631" Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.124876 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zsq5s"] Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.139585 4678 scope.go:117] "RemoveContainer" containerID="82bb2466d5687e3c598c8e28da5ad1862e9567c063e5f854ff66af04960335a6" Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.960680 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"30680578cfce9d345aad0d1d510487566ab328e0ee20a1eb81e1a421e5f4aa0f"} Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.961023 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"3d12d5896ba47c3b7090687bc855869f837b518a0033ec71ca868e77dff00ebf"} Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.961036 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"594376ea94e745b2bace2ceb779821ea7b8dafcdb2944ad3a5fb2dd9696f2229"} Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.961045 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" 
event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"a4181d39f081e865c0c927701e8e735b2dd9c0f954ca394b1ca0d92ec32c266c"} Nov 24 11:26:12 crc kubenswrapper[4678]: I1124 11:26:12.961054 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"f17a4c40c7de7cff951b0ddafd7f5fa193cd453611054f3c8c79ab6d3e298d9e"} Nov 24 11:26:13 crc kubenswrapper[4678]: I1124 11:26:13.902858 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318b13d4-6c61-4b45-bb2f-0a7e243946a6" path="/var/lib/kubelet/pods/318b13d4-6c61-4b45-bb2f-0a7e243946a6/volumes" Nov 24 11:26:13 crc kubenswrapper[4678]: I1124 11:26:13.969330 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"f61bd8f348c9cebabfcefe188780050b8ea71b509226e332cd583989f44886e0"} Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.788058 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs"] Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.791807 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.793859 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-j8mhf" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.794397 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.794661 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.896518 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn98b\" (UniqueName: \"kubernetes.io/projected/33f972c9-5774-4097-b3fd-a0adcf7f812d-kube-api-access-cn98b\") pod \"obo-prometheus-operator-668cf9dfbb-vp9fs\" (UID: \"33f972c9-5774-4097-b3fd-a0adcf7f812d\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.919845 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8"] Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.920733 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.923434 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.924780 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-8tv5k" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.935573 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn"] Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.936470 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.983478 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"c297cd4cddab68c716fb95b723291756b51e353cb2951fa65a689bf0dd76394e"} Nov 24 11:26:15 crc kubenswrapper[4678]: I1124 11:26:15.997459 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn98b\" (UniqueName: \"kubernetes.io/projected/33f972c9-5774-4097-b3fd-a0adcf7f812d-kube-api-access-cn98b\") pod \"obo-prometheus-operator-668cf9dfbb-vp9fs\" (UID: \"33f972c9-5774-4097-b3fd-a0adcf7f812d\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.017270 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn98b\" (UniqueName: \"kubernetes.io/projected/33f972c9-5774-4097-b3fd-a0adcf7f812d-kube-api-access-cn98b\") pod 
\"obo-prometheus-operator-668cf9dfbb-vp9fs\" (UID: \"33f972c9-5774-4097-b3fd-a0adcf7f812d\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.098708 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e2619d2-61fe-46e6-bd91-b9b2e2ab594d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn\" (UID: \"9e2619d2-61fe-46e6-bd91-b9b2e2ab594d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.098790 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b704215d-9f17-49e2-9bed-f17a2b0388b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8\" (UID: \"b704215d-9f17-49e2-9bed-f17a2b0388b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.098822 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b704215d-9f17-49e2-9bed-f17a2b0388b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8\" (UID: \"b704215d-9f17-49e2-9bed-f17a2b0388b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.098868 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e2619d2-61fe-46e6-bd91-b9b2e2ab594d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn\" (UID: \"9e2619d2-61fe-46e6-bd91-b9b2e2ab594d\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.110025 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.115448 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-tx7v7"] Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.116597 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.119098 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-wx92q" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.119585 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.158520 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(d449a3dfce12d619f213aa19e75763c8230cba8173e82ceb34026896957be1d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.158613 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(d449a3dfce12d619f213aa19e75763c8230cba8173e82ceb34026896957be1d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.158643 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(d449a3dfce12d619f213aa19e75763c8230cba8173e82ceb34026896957be1d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.158720 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators(33f972c9-5774-4097-b3fd-a0adcf7f812d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators(33f972c9-5774-4097-b3fd-a0adcf7f812d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(d449a3dfce12d619f213aa19e75763c8230cba8173e82ceb34026896957be1d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" podUID="33f972c9-5774-4097-b3fd-a0adcf7f812d" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.200569 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e2619d2-61fe-46e6-bd91-b9b2e2ab594d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn\" (UID: \"9e2619d2-61fe-46e6-bd91-b9b2e2ab594d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.200658 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b704215d-9f17-49e2-9bed-f17a2b0388b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8\" (UID: \"b704215d-9f17-49e2-9bed-f17a2b0388b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.200718 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b704215d-9f17-49e2-9bed-f17a2b0388b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8\" (UID: \"b704215d-9f17-49e2-9bed-f17a2b0388b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.200780 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e2619d2-61fe-46e6-bd91-b9b2e2ab594d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn\" (UID: \"9e2619d2-61fe-46e6-bd91-b9b2e2ab594d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc 
kubenswrapper[4678]: I1124 11:26:16.206271 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b704215d-9f17-49e2-9bed-f17a2b0388b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8\" (UID: \"b704215d-9f17-49e2-9bed-f17a2b0388b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.206648 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b704215d-9f17-49e2-9bed-f17a2b0388b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8\" (UID: \"b704215d-9f17-49e2-9bed-f17a2b0388b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.207052 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e2619d2-61fe-46e6-bd91-b9b2e2ab594d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn\" (UID: \"9e2619d2-61fe-46e6-bd91-b9b2e2ab594d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.215189 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e2619d2-61fe-46e6-bd91-b9b2e2ab594d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn\" (UID: \"9e2619d2-61fe-46e6-bd91-b9b2e2ab594d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.242131 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.258515 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.279275 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(71cd707ddfb0cb7a2060ee30309cbd916f59dfb691299041f2bc00690ecb53b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.279363 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(71cd707ddfb0cb7a2060ee30309cbd916f59dfb691299041f2bc00690ecb53b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.279391 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(71cd707ddfb0cb7a2060ee30309cbd916f59dfb691299041f2bc00690ecb53b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.279445 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators(b704215d-9f17-49e2-9bed-f17a2b0388b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators(b704215d-9f17-49e2-9bed-f17a2b0388b1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(71cd707ddfb0cb7a2060ee30309cbd916f59dfb691299041f2bc00690ecb53b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" podUID="b704215d-9f17-49e2-9bed-f17a2b0388b1" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.285320 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(5a59ce0cdbb1afcf86e5fe738314f55c8bc42462b5aab6c75e1c53c06fd169ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.285371 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(5a59ce0cdbb1afcf86e5fe738314f55c8bc42462b5aab6c75e1c53c06fd169ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.285395 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(5a59ce0cdbb1afcf86e5fe738314f55c8bc42462b5aab6c75e1c53c06fd169ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.285451 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators(9e2619d2-61fe-46e6-bd91-b9b2e2ab594d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators(9e2619d2-61fe-46e6-bd91-b9b2e2ab594d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(5a59ce0cdbb1afcf86e5fe738314f55c8bc42462b5aab6c75e1c53c06fd169ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" podUID="9e2619d2-61fe-46e6-bd91-b9b2e2ab594d" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.302804 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4c99\" (UniqueName: \"kubernetes.io/projected/33b87251-bed8-4721-8955-feede7c367af-kube-api-access-q4c99\") pod \"observability-operator-d8bb48f5d-tx7v7\" (UID: \"33b87251-bed8-4721-8955-feede7c367af\") " pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.302880 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/33b87251-bed8-4721-8955-feede7c367af-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-tx7v7\" (UID: \"33b87251-bed8-4721-8955-feede7c367af\") " pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.329140 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qj7c6"] Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.330101 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.334663 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-fqqzw" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.405010 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4c99\" (UniqueName: \"kubernetes.io/projected/33b87251-bed8-4721-8955-feede7c367af-kube-api-access-q4c99\") pod \"observability-operator-d8bb48f5d-tx7v7\" (UID: \"33b87251-bed8-4721-8955-feede7c367af\") " pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.405116 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/33b87251-bed8-4721-8955-feede7c367af-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-tx7v7\" (UID: \"33b87251-bed8-4721-8955-feede7c367af\") " pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.409556 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/33b87251-bed8-4721-8955-feede7c367af-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-tx7v7\" (UID: \"33b87251-bed8-4721-8955-feede7c367af\") " pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.431904 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4c99\" (UniqueName: \"kubernetes.io/projected/33b87251-bed8-4721-8955-feede7c367af-kube-api-access-q4c99\") pod \"observability-operator-d8bb48f5d-tx7v7\" (UID: \"33b87251-bed8-4721-8955-feede7c367af\") " pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" 
Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.506175 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8jr\" (UniqueName: \"kubernetes.io/projected/8eac0e32-d08f-46ca-ba1b-9c0178ec130e-kube-api-access-sh8jr\") pod \"perses-operator-5446b9c989-qj7c6\" (UID: \"8eac0e32-d08f-46ca-ba1b-9c0178ec130e\") " pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.506259 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8eac0e32-d08f-46ca-ba1b-9c0178ec130e-openshift-service-ca\") pod \"perses-operator-5446b9c989-qj7c6\" (UID: \"8eac0e32-d08f-46ca-ba1b-9c0178ec130e\") " pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.507470 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.532969 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(9c95d7c729f7de9f3b09ed1841333c236a481ae315afaf6a37da039289f1eaaa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.533080 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(9c95d7c729f7de9f3b09ed1841333c236a481ae315afaf6a37da039289f1eaaa): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.533106 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(9c95d7c729f7de9f3b09ed1841333c236a481ae315afaf6a37da039289f1eaaa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.533166 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-tx7v7_openshift-operators(33b87251-bed8-4721-8955-feede7c367af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-tx7v7_openshift-operators(33b87251-bed8-4721-8955-feede7c367af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(9c95d7c729f7de9f3b09ed1841333c236a481ae315afaf6a37da039289f1eaaa): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" podUID="33b87251-bed8-4721-8955-feede7c367af" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.607854 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh8jr\" (UniqueName: \"kubernetes.io/projected/8eac0e32-d08f-46ca-ba1b-9c0178ec130e-kube-api-access-sh8jr\") pod \"perses-operator-5446b9c989-qj7c6\" (UID: \"8eac0e32-d08f-46ca-ba1b-9c0178ec130e\") " pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.607939 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8eac0e32-d08f-46ca-ba1b-9c0178ec130e-openshift-service-ca\") pod \"perses-operator-5446b9c989-qj7c6\" (UID: \"8eac0e32-d08f-46ca-ba1b-9c0178ec130e\") " pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.609052 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8eac0e32-d08f-46ca-ba1b-9c0178ec130e-openshift-service-ca\") pod \"perses-operator-5446b9c989-qj7c6\" (UID: \"8eac0e32-d08f-46ca-ba1b-9c0178ec130e\") " pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.629328 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh8jr\" (UniqueName: \"kubernetes.io/projected/8eac0e32-d08f-46ca-ba1b-9c0178ec130e-kube-api-access-sh8jr\") pod \"perses-operator-5446b9c989-qj7c6\" (UID: \"8eac0e32-d08f-46ca-ba1b-9c0178ec130e\") " pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: I1124 11:26:16.643482 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.678095 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(80bb8b6755a71d217841e8c8512483a1ae22e15308e557f3cfd3973bb2bd7851): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.678176 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(80bb8b6755a71d217841e8c8512483a1ae22e15308e557f3cfd3973bb2bd7851): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.678204 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(80bb8b6755a71d217841e8c8512483a1ae22e15308e557f3cfd3973bb2bd7851): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:16 crc kubenswrapper[4678]: E1124 11:26:16.678272 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-qj7c6_openshift-operators(8eac0e32-d08f-46ca-ba1b-9c0178ec130e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-qj7c6_openshift-operators(8eac0e32-d08f-46ca-ba1b-9c0178ec130e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(80bb8b6755a71d217841e8c8512483a1ae22e15308e557f3cfd3973bb2bd7851): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" podUID="8eac0e32-d08f-46ca-ba1b-9c0178ec130e" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.001614 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" event={"ID":"df3c1290-a0d1-43a6-ab11-c8cb9cbf82f0","Type":"ContainerStarted","Data":"04e0bb61f7d0f37c28c1382e5d065cbc4346a2fc6b3c9ba3addd230e30b2e8b0"} Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.003435 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.004759 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.004778 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.036985 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:18 crc 
kubenswrapper[4678]: I1124 11:26:18.040054 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.045772 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" podStartSLOduration=7.045750129 podStartE2EDuration="7.045750129s" podCreationTimestamp="2025-11-24 11:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:26:18.041680001 +0000 UTC m=+588.972739630" watchObservedRunningTime="2025-11-24 11:26:18.045750129 +0000 UTC m=+588.976809768" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.696078 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn"] Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.696255 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.696902 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.719168 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-tx7v7"] Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.719336 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.727431 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.730513 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(1383d79b098e71a66d20990981c9cc7a3af293e3ae018a40e9601dd02043e24b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.730584 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(1383d79b098e71a66d20990981c9cc7a3af293e3ae018a40e9601dd02043e24b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.730632 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(1383d79b098e71a66d20990981c9cc7a3af293e3ae018a40e9601dd02043e24b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.730705 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators(9e2619d2-61fe-46e6-bd91-b9b2e2ab594d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators(9e2619d2-61fe-46e6-bd91-b9b2e2ab594d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(1383d79b098e71a66d20990981c9cc7a3af293e3ae018a40e9601dd02043e24b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" podUID="9e2619d2-61fe-46e6-bd91-b9b2e2ab594d" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.764400 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs"] Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.764642 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.765300 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.770483 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(349c75c6f90a081839ee122e0907146dd3f282b27913cb84f29e79adcbe0554c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.770551 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(349c75c6f90a081839ee122e0907146dd3f282b27913cb84f29e79adcbe0554c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.770578 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(349c75c6f90a081839ee122e0907146dd3f282b27913cb84f29e79adcbe0554c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.770630 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-tx7v7_openshift-operators(33b87251-bed8-4721-8955-feede7c367af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-tx7v7_openshift-operators(33b87251-bed8-4721-8955-feede7c367af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(349c75c6f90a081839ee122e0907146dd3f282b27913cb84f29e79adcbe0554c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" podUID="33b87251-bed8-4721-8955-feede7c367af" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.789104 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8"] Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.789319 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.790044 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.812452 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qj7c6"] Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.812624 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:18 crc kubenswrapper[4678]: I1124 11:26:18.820618 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.841539 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(3b4d31a42b1f6feebd347ee030826f384f8e94f7fe27d5db3a023ea37dc65650): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.841620 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(3b4d31a42b1f6feebd347ee030826f384f8e94f7fe27d5db3a023ea37dc65650): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.841661 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(3b4d31a42b1f6feebd347ee030826f384f8e94f7fe27d5db3a023ea37dc65650): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.841732 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators(b704215d-9f17-49e2-9bed-f17a2b0388b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators(b704215d-9f17-49e2-9bed-f17a2b0388b1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(3b4d31a42b1f6feebd347ee030826f384f8e94f7fe27d5db3a023ea37dc65650): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" podUID="b704215d-9f17-49e2-9bed-f17a2b0388b1" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.848893 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(5cd1a37400c10627c01834fd53aa7f75ef2fd7eac29a7bc8b1ef2030ea338639): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.848985 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(5cd1a37400c10627c01834fd53aa7f75ef2fd7eac29a7bc8b1ef2030ea338639): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.849017 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(5cd1a37400c10627c01834fd53aa7f75ef2fd7eac29a7bc8b1ef2030ea338639): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.849079 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators(33f972c9-5774-4097-b3fd-a0adcf7f812d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators(33f972c9-5774-4097-b3fd-a0adcf7f812d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(5cd1a37400c10627c01834fd53aa7f75ef2fd7eac29a7bc8b1ef2030ea338639): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" podUID="33f972c9-5774-4097-b3fd-a0adcf7f812d" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.875074 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(6105bbcc4fcf4304c1463d20110cbf4f63e2123eb32681811ae2d6ade5ad9077): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.875164 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(6105bbcc4fcf4304c1463d20110cbf4f63e2123eb32681811ae2d6ade5ad9077): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.875192 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(6105bbcc4fcf4304c1463d20110cbf4f63e2123eb32681811ae2d6ade5ad9077): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:18 crc kubenswrapper[4678]: E1124 11:26:18.875251 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-qj7c6_openshift-operators(8eac0e32-d08f-46ca-ba1b-9c0178ec130e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-qj7c6_openshift-operators(8eac0e32-d08f-46ca-ba1b-9c0178ec130e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(6105bbcc4fcf4304c1463d20110cbf4f63e2123eb32681811ae2d6ade5ad9077): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" podUID="8eac0e32-d08f-46ca-ba1b-9c0178ec130e" Nov 24 11:26:20 crc kubenswrapper[4678]: I1124 11:26:20.895758 4678 scope.go:117] "RemoveContainer" containerID="8bab327ee33ef6b6764f09a9c29750d42a06fb26d0580431da74c25580a9d952" Nov 24 11:26:20 crc kubenswrapper[4678]: E1124 11:26:20.896423 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h24xv_openshift-multus(f159c812-75d9-4ad6-9e20-4d208ffe42fb)\"" pod="openshift-multus/multus-h24xv" podUID="f159c812-75d9-4ad6-9e20-4d208ffe42fb" Nov 24 11:26:30 crc kubenswrapper[4678]: I1124 11:26:30.895166 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:30 crc kubenswrapper[4678]: I1124 11:26:30.896477 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:30 crc kubenswrapper[4678]: E1124 11:26:30.941288 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(797304ebb1108af50ccc464da47326b0717cced20fe8be20524dc6b0aad54419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 24 11:26:30 crc kubenswrapper[4678]: E1124 11:26:30.941551 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(797304ebb1108af50ccc464da47326b0717cced20fe8be20524dc6b0aad54419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:30 crc kubenswrapper[4678]: E1124 11:26:30.941659 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(797304ebb1108af50ccc464da47326b0717cced20fe8be20524dc6b0aad54419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:30 crc kubenswrapper[4678]: E1124 11:26:30.941874 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-tx7v7_openshift-operators(33b87251-bed8-4721-8955-feede7c367af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-tx7v7_openshift-operators(33b87251-bed8-4721-8955-feede7c367af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-tx7v7_openshift-operators_33b87251-bed8-4721-8955-feede7c367af_0(797304ebb1108af50ccc464da47326b0717cced20fe8be20524dc6b0aad54419): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" podUID="33b87251-bed8-4721-8955-feede7c367af" Nov 24 11:26:32 crc kubenswrapper[4678]: I1124 11:26:32.895003 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:32 crc kubenswrapper[4678]: I1124 11:26:32.895444 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:32 crc kubenswrapper[4678]: I1124 11:26:32.896605 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:32 crc kubenswrapper[4678]: I1124 11:26:32.896688 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:32 crc kubenswrapper[4678]: I1124 11:26:32.895753 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:32 crc kubenswrapper[4678]: I1124 11:26:32.901847 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:32 crc kubenswrapper[4678]: E1124 11:26:32.988856 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(a21b4db2ea3a00c94a2d85a1d8d8c791f79b63db85b838b79a8b62c379843a2c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 24 11:26:32 crc kubenswrapper[4678]: E1124 11:26:32.988942 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(a21b4db2ea3a00c94a2d85a1d8d8c791f79b63db85b838b79a8b62c379843a2c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:32 crc kubenswrapper[4678]: E1124 11:26:32.988970 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(a21b4db2ea3a00c94a2d85a1d8d8c791f79b63db85b838b79a8b62c379843a2c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:32 crc kubenswrapper[4678]: E1124 11:26:32.989028 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators(9e2619d2-61fe-46e6-bd91-b9b2e2ab594d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators(9e2619d2-61fe-46e6-bd91-b9b2e2ab594d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_openshift-operators_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d_0(a21b4db2ea3a00c94a2d85a1d8d8c791f79b63db85b838b79a8b62c379843a2c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" podUID="9e2619d2-61fe-46e6-bd91-b9b2e2ab594d" Nov 24 11:26:32 crc kubenswrapper[4678]: E1124 11:26:32.995562 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(fcc84bd9a62122975cc9fb3d233fa2121db2b40c4a0a9d544f8c25a47e6e5f58): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:32 crc kubenswrapper[4678]: E1124 11:26:32.995606 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(fcc84bd9a62122975cc9fb3d233fa2121db2b40c4a0a9d544f8c25a47e6e5f58): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:32 crc kubenswrapper[4678]: E1124 11:26:32.995626 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(fcc84bd9a62122975cc9fb3d233fa2121db2b40c4a0a9d544f8c25a47e6e5f58): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:32 crc kubenswrapper[4678]: E1124 11:26:32.995661 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators(b704215d-9f17-49e2-9bed-f17a2b0388b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators(b704215d-9f17-49e2-9bed-f17a2b0388b1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_openshift-operators_b704215d-9f17-49e2-9bed-f17a2b0388b1_0(fcc84bd9a62122975cc9fb3d233fa2121db2b40c4a0a9d544f8c25a47e6e5f58): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" podUID="b704215d-9f17-49e2-9bed-f17a2b0388b1" Nov 24 11:26:33 crc kubenswrapper[4678]: E1124 11:26:33.001776 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(050e106323ee92290e1b3fe727690fbdbe2cf95d0a6d4549ae2315106c21071b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:33 crc kubenswrapper[4678]: E1124 11:26:33.001864 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(050e106323ee92290e1b3fe727690fbdbe2cf95d0a6d4549ae2315106c21071b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:33 crc kubenswrapper[4678]: E1124 11:26:33.001887 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(050e106323ee92290e1b3fe727690fbdbe2cf95d0a6d4549ae2315106c21071b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:33 crc kubenswrapper[4678]: E1124 11:26:33.001948 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-qj7c6_openshift-operators(8eac0e32-d08f-46ca-ba1b-9c0178ec130e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-qj7c6_openshift-operators(8eac0e32-d08f-46ca-ba1b-9c0178ec130e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-qj7c6_openshift-operators_8eac0e32-d08f-46ca-ba1b-9c0178ec130e_0(050e106323ee92290e1b3fe727690fbdbe2cf95d0a6d4549ae2315106c21071b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" podUID="8eac0e32-d08f-46ca-ba1b-9c0178ec130e" Nov 24 11:26:33 crc kubenswrapper[4678]: I1124 11:26:33.895351 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:33 crc kubenswrapper[4678]: I1124 11:26:33.896154 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:33 crc kubenswrapper[4678]: E1124 11:26:33.930084 4678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(53b3f866a8b18f7dc35e3e06b0e603ee2409533d6ac1f0a122bd9403f07990b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 11:26:33 crc kubenswrapper[4678]: E1124 11:26:33.930508 4678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(53b3f866a8b18f7dc35e3e06b0e603ee2409533d6ac1f0a122bd9403f07990b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:33 crc kubenswrapper[4678]: E1124 11:26:33.930536 4678 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(53b3f866a8b18f7dc35e3e06b0e603ee2409533d6ac1f0a122bd9403f07990b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:33 crc kubenswrapper[4678]: E1124 11:26:33.930586 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators(33f972c9-5774-4097-b3fd-a0adcf7f812d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators(33f972c9-5774-4097-b3fd-a0adcf7f812d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-vp9fs_openshift-operators_33f972c9-5774-4097-b3fd-a0adcf7f812d_0(53b3f866a8b18f7dc35e3e06b0e603ee2409533d6ac1f0a122bd9403f07990b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" podUID="33f972c9-5774-4097-b3fd-a0adcf7f812d" Nov 24 11:26:35 crc kubenswrapper[4678]: I1124 11:26:35.895356 4678 scope.go:117] "RemoveContainer" containerID="8bab327ee33ef6b6764f09a9c29750d42a06fb26d0580431da74c25580a9d952" Nov 24 11:26:36 crc kubenswrapper[4678]: I1124 11:26:36.133048 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h24xv_f159c812-75d9-4ad6-9e20-4d208ffe42fb/kube-multus/2.log" Nov 24 11:26:36 crc kubenswrapper[4678]: I1124 11:26:36.133469 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h24xv" event={"ID":"f159c812-75d9-4ad6-9e20-4d208ffe42fb","Type":"ContainerStarted","Data":"91f52980fa490f619d0748853bea95b56a6cac1479bcc7e20503a64c44e443de"} Nov 24 11:26:41 crc kubenswrapper[4678]: I1124 11:26:41.628795 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6mz9z" Nov 24 11:26:43 crc kubenswrapper[4678]: I1124 11:26:43.894757 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:43 crc kubenswrapper[4678]: I1124 11:26:43.895464 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:44 crc kubenswrapper[4678]: I1124 11:26:44.221884 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-tx7v7"] Nov 24 11:26:44 crc kubenswrapper[4678]: W1124 11:26:44.228278 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33b87251_bed8_4721_8955_feede7c367af.slice/crio-e820b724c21b4f6513494b6e6a7f7a9fc947a7b9393cbc41da281dfc976ecd12 WatchSource:0}: Error finding container e820b724c21b4f6513494b6e6a7f7a9fc947a7b9393cbc41da281dfc976ecd12: Status 404 returned error can't find the container with id e820b724c21b4f6513494b6e6a7f7a9fc947a7b9393cbc41da281dfc976ecd12 Nov 24 11:26:44 crc kubenswrapper[4678]: I1124 11:26:44.895454 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:44 crc kubenswrapper[4678]: I1124 11:26:44.896064 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" Nov 24 11:26:44 crc kubenswrapper[4678]: I1124 11:26:44.896520 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:44 crc kubenswrapper[4678]: I1124 11:26:44.896726 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" Nov 24 11:26:45 crc kubenswrapper[4678]: I1124 11:26:45.203851 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" event={"ID":"33b87251-bed8-4721-8955-feede7c367af","Type":"ContainerStarted","Data":"e820b724c21b4f6513494b6e6a7f7a9fc947a7b9393cbc41da281dfc976ecd12"} Nov 24 11:26:45 crc kubenswrapper[4678]: I1124 11:26:45.275170 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs"] Nov 24 11:26:45 crc kubenswrapper[4678]: I1124 11:26:45.355106 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8"] Nov 24 11:26:45 crc kubenswrapper[4678]: W1124 11:26:45.368324 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb704215d_9f17_49e2_9bed_f17a2b0388b1.slice/crio-068c9fcd164b2f4725e3946a781aa1f274ee26e22d6b17bc28f25eea708bbc36 WatchSource:0}: Error finding container 068c9fcd164b2f4725e3946a781aa1f274ee26e22d6b17bc28f25eea708bbc36: Status 404 returned error can't find the container with id 068c9fcd164b2f4725e3946a781aa1f274ee26e22d6b17bc28f25eea708bbc36 Nov 24 11:26:45 crc kubenswrapper[4678]: I1124 11:26:45.895101 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:45 crc kubenswrapper[4678]: I1124 11:26:45.895886 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:46 crc kubenswrapper[4678]: I1124 11:26:46.167749 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qj7c6"] Nov 24 11:26:46 crc kubenswrapper[4678]: W1124 11:26:46.176073 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eac0e32_d08f_46ca_ba1b_9c0178ec130e.slice/crio-f9efeea46a1c4e364a92a09dace9665aed08f656b64a5acd6f644ed8faa92ab4 WatchSource:0}: Error finding container f9efeea46a1c4e364a92a09dace9665aed08f656b64a5acd6f644ed8faa92ab4: Status 404 returned error can't find the container with id f9efeea46a1c4e364a92a09dace9665aed08f656b64a5acd6f644ed8faa92ab4 Nov 24 11:26:46 crc kubenswrapper[4678]: I1124 11:26:46.213260 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" event={"ID":"33f972c9-5774-4097-b3fd-a0adcf7f812d","Type":"ContainerStarted","Data":"0ea78b0ad730c7214cdc3f952201166b98420d5bc0c37497bc164365ebf27288"} Nov 24 11:26:46 crc kubenswrapper[4678]: I1124 11:26:46.214622 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" event={"ID":"8eac0e32-d08f-46ca-ba1b-9c0178ec130e","Type":"ContainerStarted","Data":"f9efeea46a1c4e364a92a09dace9665aed08f656b64a5acd6f644ed8faa92ab4"} Nov 24 11:26:46 crc kubenswrapper[4678]: I1124 11:26:46.216084 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" event={"ID":"b704215d-9f17-49e2-9bed-f17a2b0388b1","Type":"ContainerStarted","Data":"068c9fcd164b2f4725e3946a781aa1f274ee26e22d6b17bc28f25eea708bbc36"} Nov 24 11:26:47 crc kubenswrapper[4678]: I1124 11:26:47.894751 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:47 crc kubenswrapper[4678]: I1124 11:26:47.895792 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" Nov 24 11:26:57 crc kubenswrapper[4678]: I1124 11:26:57.826041 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn"] Nov 24 11:26:57 crc kubenswrapper[4678]: W1124 11:26:57.834055 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e2619d2_61fe_46e6_bd91_b9b2e2ab594d.slice/crio-e99e8855f1095d8d8e08b18a8d374c8193ad84b8db83e9dacaec441ad9dd3597 WatchSource:0}: Error finding container e99e8855f1095d8d8e08b18a8d374c8193ad84b8db83e9dacaec441ad9dd3597: Status 404 returned error can't find the container with id e99e8855f1095d8d8e08b18a8d374c8193ad84b8db83e9dacaec441ad9dd3597 Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.313707 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" event={"ID":"9e2619d2-61fe-46e6-bd91-b9b2e2ab594d","Type":"ContainerStarted","Data":"fe922b19bda55b573ab6be1c33ecf2481cd79bd389da70f3df27fb9a3628c81f"} Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.313775 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" event={"ID":"9e2619d2-61fe-46e6-bd91-b9b2e2ab594d","Type":"ContainerStarted","Data":"e99e8855f1095d8d8e08b18a8d374c8193ad84b8db83e9dacaec441ad9dd3597"} Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.315532 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" 
event={"ID":"33b87251-bed8-4721-8955-feede7c367af","Type":"ContainerStarted","Data":"641a6d2dcf86cefbb8cddd7d104f82e2545c68293e8fd7d0b0055f038c115940"} Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.315663 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.317547 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" event={"ID":"33f972c9-5774-4097-b3fd-a0adcf7f812d","Type":"ContainerStarted","Data":"3373fd55834fbe245cf52d4e075446d3b05026db518767994ec07e0bfce2286b"} Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.319531 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" event={"ID":"8eac0e32-d08f-46ca-ba1b-9c0178ec130e","Type":"ContainerStarted","Data":"e8a46f0faf043631ab99d44d81099a5f9d9026ac17fe1b1bdd9dce8ba974df14"} Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.319659 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.321224 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" event={"ID":"b704215d-9f17-49e2-9bed-f17a2b0388b1","Type":"ContainerStarted","Data":"e9c30d4ade24840c9a14d19e61863ef4608b7c696e0a30f1666e62e87104542b"} Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.334534 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn" podStartSLOduration=43.334513647 podStartE2EDuration="43.334513647s" podCreationTimestamp="2025-11-24 11:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-24 11:26:58.331787925 +0000 UTC m=+629.262847574" watchObservedRunningTime="2025-11-24 11:26:58.334513647 +0000 UTC m=+629.265573286" Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.365039 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" podStartSLOduration=29.103087117 podStartE2EDuration="42.365016047s" podCreationTimestamp="2025-11-24 11:26:16 +0000 UTC" firstStartedPulling="2025-11-24 11:26:44.231892969 +0000 UTC m=+615.162952608" lastFinishedPulling="2025-11-24 11:26:57.493821889 +0000 UTC m=+628.424881538" observedRunningTime="2025-11-24 11:26:58.363338062 +0000 UTC m=+629.294397721" watchObservedRunningTime="2025-11-24 11:26:58.365016047 +0000 UTC m=+629.296075686" Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.383518 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-tx7v7" Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.399944 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8" podStartSLOduration=31.274071418 podStartE2EDuration="43.399921933s" podCreationTimestamp="2025-11-24 11:26:15 +0000 UTC" firstStartedPulling="2025-11-24 11:26:45.372976498 +0000 UTC m=+616.304036137" lastFinishedPulling="2025-11-24 11:26:57.498827003 +0000 UTC m=+628.429886652" observedRunningTime="2025-11-24 11:26:58.397074848 +0000 UTC m=+629.328134487" watchObservedRunningTime="2025-11-24 11:26:58.399921933 +0000 UTC m=+629.330981582" Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.425177 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vp9fs" podStartSLOduration=31.203897756 podStartE2EDuration="43.425154713s" podCreationTimestamp="2025-11-24 11:26:15 
+0000 UTC" firstStartedPulling="2025-11-24 11:26:45.303547355 +0000 UTC m=+616.234606994" lastFinishedPulling="2025-11-24 11:26:57.524804272 +0000 UTC m=+628.455863951" observedRunningTime="2025-11-24 11:26:58.421551147 +0000 UTC m=+629.352610796" watchObservedRunningTime="2025-11-24 11:26:58.425154713 +0000 UTC m=+629.356214352" Nov 24 11:26:58 crc kubenswrapper[4678]: I1124 11:26:58.449902 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" podStartSLOduration=31.10744825 podStartE2EDuration="42.449881218s" podCreationTimestamp="2025-11-24 11:26:16 +0000 UTC" firstStartedPulling="2025-11-24 11:26:46.189520174 +0000 UTC m=+617.120579813" lastFinishedPulling="2025-11-24 11:26:57.531953102 +0000 UTC m=+628.463012781" observedRunningTime="2025-11-24 11:26:58.44202458 +0000 UTC m=+629.373084229" watchObservedRunningTime="2025-11-24 11:26:58.449881218 +0000 UTC m=+629.380940857" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.647165 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-qj7c6" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.704525 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ff799"] Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.705469 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ff799" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.714171 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-xh5hn"] Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.715044 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-xh5hn" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.718656 4678 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tkk89" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.719106 4678 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2cxkq" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.719856 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.720044 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.722063 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ff799"] Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.747047 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cf7d2"] Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.748380 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.753746 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-xh5hn"] Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.757528 4678 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tntjl" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.768659 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cf7d2"] Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.792115 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hcmd\" (UniqueName: \"kubernetes.io/projected/cd465141-2168-436c-a685-2eb559e2bcb8-kube-api-access-5hcmd\") pod \"cert-manager-5b446d88c5-xh5hn\" (UID: \"cd465141-2168-436c-a685-2eb559e2bcb8\") " pod="cert-manager/cert-manager-5b446d88c5-xh5hn" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.792203 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgtnp\" (UniqueName: \"kubernetes.io/projected/a15c5721-1751-4a87-b3ba-e13cefc0153c-kube-api-access-tgtnp\") pod \"cert-manager-cainjector-7f985d654d-ff799\" (UID: \"a15c5721-1751-4a87-b3ba-e13cefc0153c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ff799" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.894257 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hcmd\" (UniqueName: \"kubernetes.io/projected/cd465141-2168-436c-a685-2eb559e2bcb8-kube-api-access-5hcmd\") pod \"cert-manager-5b446d88c5-xh5hn\" (UID: \"cd465141-2168-436c-a685-2eb559e2bcb8\") " pod="cert-manager/cert-manager-5b446d88c5-xh5hn" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.894348 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qhb\" (UniqueName: \"kubernetes.io/projected/51188e3b-bda3-4291-b54f-1abb414dd320-kube-api-access-z5qhb\") pod \"cert-manager-webhook-5655c58dd6-cf7d2\" (UID: \"51188e3b-bda3-4291-b54f-1abb414dd320\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.894380 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgtnp\" (UniqueName: \"kubernetes.io/projected/a15c5721-1751-4a87-b3ba-e13cefc0153c-kube-api-access-tgtnp\") pod \"cert-manager-cainjector-7f985d654d-ff799\" (UID: \"a15c5721-1751-4a87-b3ba-e13cefc0153c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ff799" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.915447 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgtnp\" (UniqueName: \"kubernetes.io/projected/a15c5721-1751-4a87-b3ba-e13cefc0153c-kube-api-access-tgtnp\") pod \"cert-manager-cainjector-7f985d654d-ff799\" (UID: \"a15c5721-1751-4a87-b3ba-e13cefc0153c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ff799" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.917769 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hcmd\" (UniqueName: \"kubernetes.io/projected/cd465141-2168-436c-a685-2eb559e2bcb8-kube-api-access-5hcmd\") pod \"cert-manager-5b446d88c5-xh5hn\" (UID: \"cd465141-2168-436c-a685-2eb559e2bcb8\") " pod="cert-manager/cert-manager-5b446d88c5-xh5hn" Nov 24 11:27:06 crc kubenswrapper[4678]: I1124 11:27:06.995615 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qhb\" (UniqueName: \"kubernetes.io/projected/51188e3b-bda3-4291-b54f-1abb414dd320-kube-api-access-z5qhb\") pod \"cert-manager-webhook-5655c58dd6-cf7d2\" (UID: \"51188e3b-bda3-4291-b54f-1abb414dd320\") " 
pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" Nov 24 11:27:07 crc kubenswrapper[4678]: I1124 11:27:07.021065 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qhb\" (UniqueName: \"kubernetes.io/projected/51188e3b-bda3-4291-b54f-1abb414dd320-kube-api-access-z5qhb\") pod \"cert-manager-webhook-5655c58dd6-cf7d2\" (UID: \"51188e3b-bda3-4291-b54f-1abb414dd320\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" Nov 24 11:27:07 crc kubenswrapper[4678]: I1124 11:27:07.038276 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ff799" Nov 24 11:27:07 crc kubenswrapper[4678]: I1124 11:27:07.046838 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-xh5hn" Nov 24 11:27:07 crc kubenswrapper[4678]: I1124 11:27:07.063442 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" Nov 24 11:27:07 crc kubenswrapper[4678]: I1124 11:27:07.427738 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cf7d2"] Nov 24 11:27:07 crc kubenswrapper[4678]: I1124 11:27:07.521492 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ff799"] Nov 24 11:27:07 crc kubenswrapper[4678]: W1124 11:27:07.530836 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd465141_2168_436c_a685_2eb559e2bcb8.slice/crio-69e8242a23228e72281d584998b45d39d981f746d7e8164385e50caef63ccd8a WatchSource:0}: Error finding container 69e8242a23228e72281d584998b45d39d981f746d7e8164385e50caef63ccd8a: Status 404 returned error can't find the container with id 69e8242a23228e72281d584998b45d39d981f746d7e8164385e50caef63ccd8a Nov 24 11:27:07 crc kubenswrapper[4678]: I1124 
11:27:07.534209 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-xh5hn"] Nov 24 11:27:07 crc kubenswrapper[4678]: W1124 11:27:07.547384 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15c5721_1751_4a87_b3ba_e13cefc0153c.slice/crio-9e8c67a303f1c95a5906fa6f9e1e490ae3788daf2046d67e788e0fd339029717 WatchSource:0}: Error finding container 9e8c67a303f1c95a5906fa6f9e1e490ae3788daf2046d67e788e0fd339029717: Status 404 returned error can't find the container with id 9e8c67a303f1c95a5906fa6f9e1e490ae3788daf2046d67e788e0fd339029717 Nov 24 11:27:08 crc kubenswrapper[4678]: I1124 11:27:08.415789 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" event={"ID":"51188e3b-bda3-4291-b54f-1abb414dd320","Type":"ContainerStarted","Data":"b1057df55376df6a89304658de06b3a840d3b63d6536b4497023d18d685e64cc"} Nov 24 11:27:08 crc kubenswrapper[4678]: I1124 11:27:08.416717 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ff799" event={"ID":"a15c5721-1751-4a87-b3ba-e13cefc0153c","Type":"ContainerStarted","Data":"9e8c67a303f1c95a5906fa6f9e1e490ae3788daf2046d67e788e0fd339029717"} Nov 24 11:27:08 crc kubenswrapper[4678]: I1124 11:27:08.417411 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-xh5hn" event={"ID":"cd465141-2168-436c-a685-2eb559e2bcb8","Type":"ContainerStarted","Data":"69e8242a23228e72281d584998b45d39d981f746d7e8164385e50caef63ccd8a"} Nov 24 11:27:12 crc kubenswrapper[4678]: I1124 11:27:12.459836 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" event={"ID":"51188e3b-bda3-4291-b54f-1abb414dd320","Type":"ContainerStarted","Data":"d9991bbdfdd74d26022f25325ab9a14435100414a187677948a812d807b707a1"} Nov 24 11:27:12 crc 
kubenswrapper[4678]: I1124 11:27:12.460395 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" Nov 24 11:27:12 crc kubenswrapper[4678]: I1124 11:27:12.463902 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ff799" event={"ID":"a15c5721-1751-4a87-b3ba-e13cefc0153c","Type":"ContainerStarted","Data":"c28d4b632a505e279235d5e49f02dd1cbbf47a634a686e80ed205ea2761525d8"} Nov 24 11:27:12 crc kubenswrapper[4678]: I1124 11:27:12.477401 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" podStartSLOduration=2.084961281 podStartE2EDuration="6.477371832s" podCreationTimestamp="2025-11-24 11:27:06 +0000 UTC" firstStartedPulling="2025-11-24 11:27:07.438097797 +0000 UTC m=+638.369157426" lastFinishedPulling="2025-11-24 11:27:11.830508338 +0000 UTC m=+642.761567977" observedRunningTime="2025-11-24 11:27:12.475936304 +0000 UTC m=+643.406995953" watchObservedRunningTime="2025-11-24 11:27:12.477371832 +0000 UTC m=+643.408431461" Nov 24 11:27:12 crc kubenswrapper[4678]: I1124 11:27:12.495146 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-ff799" podStartSLOduration=2.226159699 podStartE2EDuration="6.495118394s" podCreationTimestamp="2025-11-24 11:27:06 +0000 UTC" firstStartedPulling="2025-11-24 11:27:07.560199577 +0000 UTC m=+638.491259206" lastFinishedPulling="2025-11-24 11:27:11.829158262 +0000 UTC m=+642.760217901" observedRunningTime="2025-11-24 11:27:12.492635538 +0000 UTC m=+643.423695197" watchObservedRunningTime="2025-11-24 11:27:12.495118394 +0000 UTC m=+643.426178033" Nov 24 11:27:13 crc kubenswrapper[4678]: I1124 11:27:13.474840 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-xh5hn" 
event={"ID":"cd465141-2168-436c-a685-2eb559e2bcb8","Type":"ContainerStarted","Data":"e907faecc43ee21812fd4fb4a1dd239bbbb601e7e6bbad87c5deddf42a03baa9"} Nov 24 11:27:13 crc kubenswrapper[4678]: I1124 11:27:13.503150 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-xh5hn" podStartSLOduration=1.9759288499999998 podStartE2EDuration="7.503119901s" podCreationTimestamp="2025-11-24 11:27:06 +0000 UTC" firstStartedPulling="2025-11-24 11:27:07.534635589 +0000 UTC m=+638.465695228" lastFinishedPulling="2025-11-24 11:27:13.06182664 +0000 UTC m=+643.992886279" observedRunningTime="2025-11-24 11:27:13.494520472 +0000 UTC m=+644.425580121" watchObservedRunningTime="2025-11-24 11:27:13.503119901 +0000 UTC m=+644.434179540" Nov 24 11:27:17 crc kubenswrapper[4678]: I1124 11:27:17.068374 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-cf7d2" Nov 24 11:27:40 crc kubenswrapper[4678]: I1124 11:27:40.798971 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc"] Nov 24 11:27:40 crc kubenswrapper[4678]: I1124 11:27:40.801431 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:40 crc kubenswrapper[4678]: I1124 11:27:40.804226 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:27:40 crc kubenswrapper[4678]: I1124 11:27:40.813157 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc"] Nov 24 11:27:40 crc kubenswrapper[4678]: I1124 11:27:40.935891 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpjs8\" (UniqueName: \"kubernetes.io/projected/8249267d-adcb-4ae7-ba3d-438af2982a22-kube-api-access-mpjs8\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:40 crc kubenswrapper[4678]: I1124 11:27:40.936286 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:40 crc kubenswrapper[4678]: I1124 11:27:40.936474 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:41 crc kubenswrapper[4678]: 
I1124 11:27:41.037580 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpjs8\" (UniqueName: \"kubernetes.io/projected/8249267d-adcb-4ae7-ba3d-438af2982a22-kube-api-access-mpjs8\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.037636 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.037697 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.038249 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.038660 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.069658 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpjs8\" (UniqueName: \"kubernetes.io/projected/8249267d-adcb-4ae7-ba3d-438af2982a22-kube-api-access-mpjs8\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.118343 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.386436 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc"] Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.801714 4678 generic.go:334] "Generic (PLEG): container finished" podID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerID="83bf304c74e32eca01bcf8d141930fbdffc3917050105bc99c39347c9b7f87e0" exitCode=0 Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.801770 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" event={"ID":"8249267d-adcb-4ae7-ba3d-438af2982a22","Type":"ContainerDied","Data":"83bf304c74e32eca01bcf8d141930fbdffc3917050105bc99c39347c9b7f87e0"} Nov 24 11:27:41 crc kubenswrapper[4678]: I1124 11:27:41.801805 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" event={"ID":"8249267d-adcb-4ae7-ba3d-438af2982a22","Type":"ContainerStarted","Data":"93cc149c7b15b1634aa1f701ed35b6bd441a1c976288c484ff216d8b5d778c99"} Nov 24 11:27:43 crc kubenswrapper[4678]: I1124 11:27:43.816362 4678 generic.go:334] "Generic (PLEG): container finished" podID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerID="07e249a85ac59cc9a1d42142eecbadcc07c76b0ccb135a7df6adcea280431176" exitCode=0 Nov 24 11:27:43 crc kubenswrapper[4678]: I1124 11:27:43.816489 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" event={"ID":"8249267d-adcb-4ae7-ba3d-438af2982a22","Type":"ContainerDied","Data":"07e249a85ac59cc9a1d42142eecbadcc07c76b0ccb135a7df6adcea280431176"} Nov 24 11:27:44 crc kubenswrapper[4678]: I1124 11:27:44.826578 4678 generic.go:334] "Generic (PLEG): container finished" podID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerID="925bb51f28cedb6fdbabb6ac31f22fb07165686961859be3fbef19e9a29fb9cc" exitCode=0 Nov 24 11:27:44 crc kubenswrapper[4678]: I1124 11:27:44.826708 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" event={"ID":"8249267d-adcb-4ae7-ba3d-438af2982a22","Type":"ContainerDied","Data":"925bb51f28cedb6fdbabb6ac31f22fb07165686961859be3fbef19e9a29fb9cc"} Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.117824 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.139044 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-bundle\") pod \"8249267d-adcb-4ae7-ba3d-438af2982a22\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.139265 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpjs8\" (UniqueName: \"kubernetes.io/projected/8249267d-adcb-4ae7-ba3d-438af2982a22-kube-api-access-mpjs8\") pod \"8249267d-adcb-4ae7-ba3d-438af2982a22\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.139294 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-util\") pod \"8249267d-adcb-4ae7-ba3d-438af2982a22\" (UID: \"8249267d-adcb-4ae7-ba3d-438af2982a22\") " Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.141899 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-bundle" (OuterVolumeSpecName: "bundle") pod "8249267d-adcb-4ae7-ba3d-438af2982a22" (UID: "8249267d-adcb-4ae7-ba3d-438af2982a22"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.147968 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8249267d-adcb-4ae7-ba3d-438af2982a22-kube-api-access-mpjs8" (OuterVolumeSpecName: "kube-api-access-mpjs8") pod "8249267d-adcb-4ae7-ba3d-438af2982a22" (UID: "8249267d-adcb-4ae7-ba3d-438af2982a22"). InnerVolumeSpecName "kube-api-access-mpjs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.158481 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-util" (OuterVolumeSpecName: "util") pod "8249267d-adcb-4ae7-ba3d-438af2982a22" (UID: "8249267d-adcb-4ae7-ba3d-438af2982a22"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.240375 4678 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.240412 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpjs8\" (UniqueName: \"kubernetes.io/projected/8249267d-adcb-4ae7-ba3d-438af2982a22-kube-api-access-mpjs8\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.240423 4678 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8249267d-adcb-4ae7-ba3d-438af2982a22-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.864717 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" event={"ID":"8249267d-adcb-4ae7-ba3d-438af2982a22","Type":"ContainerDied","Data":"93cc149c7b15b1634aa1f701ed35b6bd441a1c976288c484ff216d8b5d778c99"} Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.864786 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93cc149c7b15b1634aa1f701ed35b6bd441a1c976288c484ff216d8b5d778c99" Nov 24 11:27:46 crc kubenswrapper[4678]: I1124 11:27:46.864904 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc" Nov 24 11:27:47 crc kubenswrapper[4678]: I1124 11:27:47.971638 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57"] Nov 24 11:27:47 crc kubenswrapper[4678]: E1124 11:27:47.972090 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerName="extract" Nov 24 11:27:47 crc kubenswrapper[4678]: I1124 11:27:47.972117 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerName="extract" Nov 24 11:27:47 crc kubenswrapper[4678]: E1124 11:27:47.972138 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerName="pull" Nov 24 11:27:47 crc kubenswrapper[4678]: I1124 11:27:47.972149 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerName="pull" Nov 24 11:27:47 crc kubenswrapper[4678]: E1124 11:27:47.972177 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerName="util" Nov 24 11:27:47 crc kubenswrapper[4678]: I1124 11:27:47.972189 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerName="util" Nov 24 11:27:47 crc kubenswrapper[4678]: I1124 11:27:47.972400 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="8249267d-adcb-4ae7-ba3d-438af2982a22" containerName="extract" Nov 24 11:27:47 crc kubenswrapper[4678]: I1124 11:27:47.973781 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:47 crc kubenswrapper[4678]: I1124 11:27:47.975732 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:27:47 crc kubenswrapper[4678]: I1124 11:27:47.986989 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57"] Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.072693 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.073220 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.073276 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klsbg\" (UniqueName: \"kubernetes.io/projected/851f5f66-c12d-4242-aa64-12056f528f46-kube-api-access-klsbg\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: 
I1124 11:27:48.174814 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.174919 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.174948 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klsbg\" (UniqueName: \"kubernetes.io/projected/851f5f66-c12d-4242-aa64-12056f528f46-kube-api-access-klsbg\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.175650 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.175726 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.194021 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klsbg\" (UniqueName: \"kubernetes.io/projected/851f5f66-c12d-4242-aa64-12056f528f46-kube-api-access-klsbg\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.311419 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.553910 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57"] Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.892011 4678 generic.go:334] "Generic (PLEG): container finished" podID="851f5f66-c12d-4242-aa64-12056f528f46" containerID="91df2fa2b4a363769abf4d4725d5481a84d3ce17101c61036d44d9c183e50732" exitCode=0 Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.892106 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" event={"ID":"851f5f66-c12d-4242-aa64-12056f528f46","Type":"ContainerDied","Data":"91df2fa2b4a363769abf4d4725d5481a84d3ce17101c61036d44d9c183e50732"} Nov 24 11:27:48 crc kubenswrapper[4678]: I1124 11:27:48.892584 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" event={"ID":"851f5f66-c12d-4242-aa64-12056f528f46","Type":"ContainerStarted","Data":"43f592146ffb7f5902d76ff56c63ec5a1cc3c8f38593c965a774c6c34b995685"} Nov 24 11:27:50 crc kubenswrapper[4678]: I1124 11:27:50.920723 4678 generic.go:334] "Generic (PLEG): container finished" podID="851f5f66-c12d-4242-aa64-12056f528f46" containerID="46f17529170ecd1daa46f2cf2e5b6854d7357a916e1fb015deeb2bbc2fef1598" exitCode=0 Nov 24 11:27:50 crc kubenswrapper[4678]: I1124 11:27:50.920781 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" event={"ID":"851f5f66-c12d-4242-aa64-12056f528f46","Type":"ContainerDied","Data":"46f17529170ecd1daa46f2cf2e5b6854d7357a916e1fb015deeb2bbc2fef1598"} Nov 24 11:27:51 crc kubenswrapper[4678]: I1124 11:27:51.929630 4678 generic.go:334] "Generic (PLEG): container finished" podID="851f5f66-c12d-4242-aa64-12056f528f46" containerID="d00f1ef6cb90ac2ad511d9530ad175e94c2148ca17b933c31aab4b5ec0c87c19" exitCode=0 Nov 24 11:27:51 crc kubenswrapper[4678]: I1124 11:27:51.929740 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" event={"ID":"851f5f66-c12d-4242-aa64-12056f528f46","Type":"ContainerDied","Data":"d00f1ef6cb90ac2ad511d9530ad175e94c2148ca17b933c31aab4b5ec0c87c19"} Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.237717 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.272502 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klsbg\" (UniqueName: \"kubernetes.io/projected/851f5f66-c12d-4242-aa64-12056f528f46-kube-api-access-klsbg\") pod \"851f5f66-c12d-4242-aa64-12056f528f46\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.272717 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-util\") pod \"851f5f66-c12d-4242-aa64-12056f528f46\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.272756 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-bundle\") pod \"851f5f66-c12d-4242-aa64-12056f528f46\" (UID: \"851f5f66-c12d-4242-aa64-12056f528f46\") " Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.273979 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-bundle" (OuterVolumeSpecName: "bundle") pod "851f5f66-c12d-4242-aa64-12056f528f46" (UID: "851f5f66-c12d-4242-aa64-12056f528f46"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.283610 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851f5f66-c12d-4242-aa64-12056f528f46-kube-api-access-klsbg" (OuterVolumeSpecName: "kube-api-access-klsbg") pod "851f5f66-c12d-4242-aa64-12056f528f46" (UID: "851f5f66-c12d-4242-aa64-12056f528f46"). InnerVolumeSpecName "kube-api-access-klsbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.287748 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-util" (OuterVolumeSpecName: "util") pod "851f5f66-c12d-4242-aa64-12056f528f46" (UID: "851f5f66-c12d-4242-aa64-12056f528f46"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.374846 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klsbg\" (UniqueName: \"kubernetes.io/projected/851f5f66-c12d-4242-aa64-12056f528f46-kube-api-access-klsbg\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.374888 4678 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.374899 4678 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851f5f66-c12d-4242-aa64-12056f528f46-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.947646 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" event={"ID":"851f5f66-c12d-4242-aa64-12056f528f46","Type":"ContainerDied","Data":"43f592146ffb7f5902d76ff56c63ec5a1cc3c8f38593c965a774c6c34b995685"} Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.947722 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43f592146ffb7f5902d76ff56c63ec5a1cc3c8f38593c965a774c6c34b995685" Nov 24 11:27:53 crc kubenswrapper[4678]: I1124 11:27:53.947801 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.285932 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh"] Nov 24 11:27:55 crc kubenswrapper[4678]: E1124 11:27:55.286632 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f5f66-c12d-4242-aa64-12056f528f46" containerName="util" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.286648 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f5f66-c12d-4242-aa64-12056f528f46" containerName="util" Nov 24 11:27:55 crc kubenswrapper[4678]: E1124 11:27:55.286685 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f5f66-c12d-4242-aa64-12056f528f46" containerName="extract" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.286692 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f5f66-c12d-4242-aa64-12056f528f46" containerName="extract" Nov 24 11:27:55 crc kubenswrapper[4678]: E1124 11:27:55.286703 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851f5f66-c12d-4242-aa64-12056f528f46" containerName="pull" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.286709 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="851f5f66-c12d-4242-aa64-12056f528f46" containerName="pull" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.286865 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="851f5f66-c12d-4242-aa64-12056f528f46" containerName="extract" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.287792 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.291752 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.292054 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-r8rv2" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.292160 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.292896 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.293034 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.293234 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.301727 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh"] Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.411831 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgkr\" (UniqueName: \"kubernetes.io/projected/77532de8-8fa2-4555-a740-5b2f22acc429-kube-api-access-hkgkr\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: 
I1124 11:27:55.411894 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-apiservice-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.411922 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.411980 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-webhook-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.412030 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/77532de8-8fa2-4555-a740-5b2f22acc429-manager-config\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.513078 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manager-config\" (UniqueName: \"kubernetes.io/configmap/77532de8-8fa2-4555-a740-5b2f22acc429-manager-config\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.513147 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgkr\" (UniqueName: \"kubernetes.io/projected/77532de8-8fa2-4555-a740-5b2f22acc429-kube-api-access-hkgkr\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.513178 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-apiservice-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.513199 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.513251 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-webhook-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: 
\"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.514425 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/77532de8-8fa2-4555-a740-5b2f22acc429-manager-config\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.521201 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-apiservice-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.525373 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-webhook-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.525462 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77532de8-8fa2-4555-a740-5b2f22acc429-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.533441 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hkgkr\" (UniqueName: \"kubernetes.io/projected/77532de8-8fa2-4555-a740-5b2f22acc429-kube-api-access-hkgkr\") pod \"loki-operator-controller-manager-7b9848658c-p2tjh\" (UID: \"77532de8-8fa2-4555-a740-5b2f22acc429\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.612774 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.877124 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh"] Nov 24 11:27:55 crc kubenswrapper[4678]: I1124 11:27:55.964702 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" event={"ID":"77532de8-8fa2-4555-a740-5b2f22acc429","Type":"ContainerStarted","Data":"128ec5cdff9e0928830cbd6e63f2639966ac13a0cdb8d799c8c18c4cf0899520"} Nov 24 11:28:00 crc kubenswrapper[4678]: I1124 11:28:00.299961 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:28:00 crc kubenswrapper[4678]: I1124 11:28:00.300446 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:28:02 crc kubenswrapper[4678]: I1124 11:28:02.013263 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" event={"ID":"77532de8-8fa2-4555-a740-5b2f22acc429","Type":"ContainerStarted","Data":"0d5a3bd3175cdbc2ea907e75a8c634d263267cf42ce3d6c128ab6efd3a2a2083"} Nov 24 11:28:03 crc kubenswrapper[4678]: I1124 11:28:03.976139 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-kz7kw"] Nov 24 11:28:03 crc kubenswrapper[4678]: I1124 11:28:03.977445 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-kz7kw" Nov 24 11:28:03 crc kubenswrapper[4678]: I1124 11:28:03.983190 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 24 11:28:03 crc kubenswrapper[4678]: I1124 11:28:03.983493 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 24 11:28:03 crc kubenswrapper[4678]: I1124 11:28:03.983659 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-w298h" Nov 24 11:28:04 crc kubenswrapper[4678]: I1124 11:28:04.008934 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-kz7kw"] Nov 24 11:28:04 crc kubenswrapper[4678]: I1124 11:28:04.094092 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctlq\" (UniqueName: \"kubernetes.io/projected/d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f-kube-api-access-qctlq\") pod \"cluster-logging-operator-ff9846bd-kz7kw\" (UID: \"d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-kz7kw" Nov 24 11:28:04 crc kubenswrapper[4678]: I1124 11:28:04.196151 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctlq\" (UniqueName: 
\"kubernetes.io/projected/d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f-kube-api-access-qctlq\") pod \"cluster-logging-operator-ff9846bd-kz7kw\" (UID: \"d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-kz7kw" Nov 24 11:28:04 crc kubenswrapper[4678]: I1124 11:28:04.224565 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctlq\" (UniqueName: \"kubernetes.io/projected/d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f-kube-api-access-qctlq\") pod \"cluster-logging-operator-ff9846bd-kz7kw\" (UID: \"d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-kz7kw" Nov 24 11:28:04 crc kubenswrapper[4678]: I1124 11:28:04.301547 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-kz7kw" Nov 24 11:28:04 crc kubenswrapper[4678]: I1124 11:28:04.650903 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-kz7kw"] Nov 24 11:28:05 crc kubenswrapper[4678]: I1124 11:28:05.053569 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-kz7kw" event={"ID":"d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f","Type":"ContainerStarted","Data":"fabb397868e81988fb4f10324a765fa4e2df7460511cc42f7d458c79e3335fef"} Nov 24 11:28:10 crc kubenswrapper[4678]: I1124 11:28:10.107791 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" event={"ID":"77532de8-8fa2-4555-a740-5b2f22acc429","Type":"ContainerStarted","Data":"dad7fa16d84c06e1285ddd83dc0159ec36bda7cc4c55d7dac9212715a66009a7"} Nov 24 11:28:10 crc kubenswrapper[4678]: I1124 11:28:10.108904 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:28:10 crc 
kubenswrapper[4678]: I1124 11:28:10.111606 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" Nov 24 11:28:10 crc kubenswrapper[4678]: I1124 11:28:10.145151 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7b9848658c-p2tjh" podStartSLOduration=1.856477017 podStartE2EDuration="15.145121526s" podCreationTimestamp="2025-11-24 11:27:55 +0000 UTC" firstStartedPulling="2025-11-24 11:27:55.885486812 +0000 UTC m=+686.816546451" lastFinishedPulling="2025-11-24 11:28:09.174131331 +0000 UTC m=+700.105190960" observedRunningTime="2025-11-24 11:28:10.129888482 +0000 UTC m=+701.060948131" watchObservedRunningTime="2025-11-24 11:28:10.145121526 +0000 UTC m=+701.076181165" Nov 24 11:28:13 crc kubenswrapper[4678]: I1124 11:28:13.131825 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-kz7kw" event={"ID":"d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f","Type":"ContainerStarted","Data":"1f622bda5364baf04504444ceaa204e44328000876d8e18e5f7e60c43059c9af"} Nov 24 11:28:13 crc kubenswrapper[4678]: I1124 11:28:13.150873 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-kz7kw" podStartSLOduration=1.889823598 podStartE2EDuration="10.150847422s" podCreationTimestamp="2025-11-24 11:28:03 +0000 UTC" firstStartedPulling="2025-11-24 11:28:04.671419783 +0000 UTC m=+695.602479422" lastFinishedPulling="2025-11-24 11:28:12.932443607 +0000 UTC m=+703.863503246" observedRunningTime="2025-11-24 11:28:13.149006633 +0000 UTC m=+704.080066272" watchObservedRunningTime="2025-11-24 11:28:13.150847422 +0000 UTC m=+704.081907071" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.122225 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 24 11:28:18 crc 
kubenswrapper[4678]: I1124 11:28:18.123946 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.127235 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.127328 4678 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-qsbgm" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.127769 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.141897 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.278179 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e35add4-ae72-452d-b754-dbaded6eb221\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e35add4-ae72-452d-b754-dbaded6eb221\") pod \"minio\" (UID: \"c5ae9151-e58f-4dc9-a19f-013ba5c69402\") " pod="minio-dev/minio" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.278293 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb6zz\" (UniqueName: \"kubernetes.io/projected/c5ae9151-e58f-4dc9-a19f-013ba5c69402-kube-api-access-cb6zz\") pod \"minio\" (UID: \"c5ae9151-e58f-4dc9-a19f-013ba5c69402\") " pod="minio-dev/minio" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.379405 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e35add4-ae72-452d-b754-dbaded6eb221\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e35add4-ae72-452d-b754-dbaded6eb221\") pod \"minio\" (UID: \"c5ae9151-e58f-4dc9-a19f-013ba5c69402\") " pod="minio-dev/minio" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 
11:28:18.379469 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb6zz\" (UniqueName: \"kubernetes.io/projected/c5ae9151-e58f-4dc9-a19f-013ba5c69402-kube-api-access-cb6zz\") pod \"minio\" (UID: \"c5ae9151-e58f-4dc9-a19f-013ba5c69402\") " pod="minio-dev/minio" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.384252 4678 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.384306 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e35add4-ae72-452d-b754-dbaded6eb221\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e35add4-ae72-452d-b754-dbaded6eb221\") pod \"minio\" (UID: \"c5ae9151-e58f-4dc9-a19f-013ba5c69402\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/298dd9786c293d8ae45a621aa2e46208013135b8c8191d4c4c8bfe74204364eb/globalmount\"" pod="minio-dev/minio" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.429252 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb6zz\" (UniqueName: \"kubernetes.io/projected/c5ae9151-e58f-4dc9-a19f-013ba5c69402-kube-api-access-cb6zz\") pod \"minio\" (UID: \"c5ae9151-e58f-4dc9-a19f-013ba5c69402\") " pod="minio-dev/minio" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.442222 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e35add4-ae72-452d-b754-dbaded6eb221\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e35add4-ae72-452d-b754-dbaded6eb221\") pod \"minio\" (UID: \"c5ae9151-e58f-4dc9-a19f-013ba5c69402\") " pod="minio-dev/minio" Nov 24 11:28:18 crc kubenswrapper[4678]: I1124 11:28:18.744539 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Nov 24 11:28:19 crc kubenswrapper[4678]: I1124 11:28:19.237780 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 24 11:28:19 crc kubenswrapper[4678]: W1124 11:28:19.247809 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5ae9151_e58f_4dc9_a19f_013ba5c69402.slice/crio-75728918ee024c0ed533845f7e4045f1b88cf0844e22620c4e8d5c96f58a3ba1 WatchSource:0}: Error finding container 75728918ee024c0ed533845f7e4045f1b88cf0844e22620c4e8d5c96f58a3ba1: Status 404 returned error can't find the container with id 75728918ee024c0ed533845f7e4045f1b88cf0844e22620c4e8d5c96f58a3ba1 Nov 24 11:28:20 crc kubenswrapper[4678]: I1124 11:28:20.182870 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c5ae9151-e58f-4dc9-a19f-013ba5c69402","Type":"ContainerStarted","Data":"75728918ee024c0ed533845f7e4045f1b88cf0844e22620c4e8d5c96f58a3ba1"} Nov 24 11:28:23 crc kubenswrapper[4678]: I1124 11:28:23.214867 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c5ae9151-e58f-4dc9-a19f-013ba5c69402","Type":"ContainerStarted","Data":"ff35531ad9dc2c43e3bbd2158c48e5fcfed6d8a4c142ae87888556a4168c23d4"} Nov 24 11:28:23 crc kubenswrapper[4678]: I1124 11:28:23.244773 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.50323678 podStartE2EDuration="8.244729009s" podCreationTimestamp="2025-11-24 11:28:15 +0000 UTC" firstStartedPulling="2025-11-24 11:28:19.250862144 +0000 UTC m=+710.181921783" lastFinishedPulling="2025-11-24 11:28:22.992354353 +0000 UTC m=+713.923414012" observedRunningTime="2025-11-24 11:28:23.235830372 +0000 UTC m=+714.166890091" watchObservedRunningTime="2025-11-24 11:28:23.244729009 +0000 UTC m=+714.175788688" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.266846 4678 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.273832 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.293211 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-2mhh4" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.293439 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.293586 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.293749 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.298194 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.300035 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.373915 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.373987 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.374143 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.374256 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4833108-5c1f-4961-bb34-9bb438a1c4ef-config\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.374290 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv6rs\" (UniqueName: \"kubernetes.io/projected/f4833108-5c1f-4961-bb34-9bb438a1c4ef-kube-api-access-pv6rs\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.476496 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4833108-5c1f-4961-bb34-9bb438a1c4ef-config\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: 
\"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.476560 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv6rs\" (UniqueName: \"kubernetes.io/projected/f4833108-5c1f-4961-bb34-9bb438a1c4ef-kube-api-access-pv6rs\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.476608 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.476642 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.476729 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.477900 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.484196 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-dbwm8"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.485629 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.496656 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.496742 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/f4833108-5c1f-4961-bb34-9bb438a1c4ef-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.504377 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4833108-5c1f-4961-bb34-9bb438a1c4ef-config\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" 
Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.509192 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.509531 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.509741 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.518306 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv6rs\" (UniqueName: \"kubernetes.io/projected/f4833108-5c1f-4961-bb34-9bb438a1c4ef-kube-api-access-pv6rs\") pod \"logging-loki-distributor-76cc67bf56-jwzsf\" (UID: \"f4833108-5c1f-4961-bb34-9bb438a1c4ef\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.519208 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-dbwm8"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.585837 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.586749 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.591829 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.592014 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.601397 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.605106 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.683340 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85d2201-78d6-477e-a798-2096dc5b916a-config\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.683403 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.683431 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8mgp\" (UniqueName: \"kubernetes.io/projected/b85d2201-78d6-477e-a798-2096dc5b916a-kube-api-access-s8mgp\") 
pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.683453 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.683502 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.683533 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.704661 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.706091 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.711768 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-cplb6" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.712047 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.712197 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.716022 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.716243 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.716392 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.729168 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.730477 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.737999 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.744185 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5"] Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.787455 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50b79b7-550a-4135-9a07-71ba28340eb6-config\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788484 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6z6\" (UniqueName: \"kubernetes.io/projected/e50b79b7-550a-4135-9a07-71ba28340eb6-kube-api-access-8p6z6\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788572 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788633 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b85d2201-78d6-477e-a798-2096dc5b916a-config\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788689 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788725 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788753 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8mgp\" (UniqueName: \"kubernetes.io/projected/b85d2201-78d6-477e-a798-2096dc5b916a-kube-api-access-s8mgp\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788789 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788815 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788890 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.788931 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.791384 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85d2201-78d6-477e-a798-2096dc5b916a-config\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.792068 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.804336 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.804350 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.804806 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b85d2201-78d6-477e-a798-2096dc5b916a-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.816451 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8mgp\" (UniqueName: \"kubernetes.io/projected/b85d2201-78d6-477e-a798-2096dc5b916a-kube-api-access-s8mgp\") pod \"logging-loki-querier-5895d59bb8-dbwm8\" (UID: \"b85d2201-78d6-477e-a798-2096dc5b916a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.871136 4678 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.892778 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.892840 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.892873 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50b79b7-550a-4135-9a07-71ba28340eb6-config\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.892904 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-rbac\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894192 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e50b79b7-550a-4135-9a07-71ba28340eb6-config\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894253 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894293 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894317 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tls-secret\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894347 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-lokistack-gateway\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " 
pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894398 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6z6\" (UniqueName: \"kubernetes.io/projected/e50b79b7-550a-4135-9a07-71ba28340eb6-kube-api-access-8p6z6\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894509 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894537 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-rbac\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894608 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894646 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894683 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fdb\" (UniqueName: \"kubernetes.io/projected/3b4a9171-61ec-4c11-ad33-cf613849ac75-kube-api-access-r5fdb\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894724 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-lokistack-gateway\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894781 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tenants\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894898 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: 
\"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894954 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.894983 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpd2w\" (UniqueName: \"kubernetes.io/projected/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-kube-api-access-xpd2w\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.895013 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tenants\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.895034 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tls-secret\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.895819 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.900358 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.916819 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/e50b79b7-550a-4135-9a07-71ba28340eb6-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.923353 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6z6\" (UniqueName: \"kubernetes.io/projected/e50b79b7-550a-4135-9a07-71ba28340eb6-kube-api-access-8p6z6\") pod \"logging-loki-query-frontend-84558f7c9f-zw4kq\" (UID: \"e50b79b7-550a-4135-9a07-71ba28340eb6\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996360 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tenants\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " 
pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996417 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tls-secret\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996456 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996483 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996522 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-rbac\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996547 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996571 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996604 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tls-secret\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996633 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-lokistack-gateway\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996710 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-rbac\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996750 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fdb\" (UniqueName: \"kubernetes.io/projected/3b4a9171-61ec-4c11-ad33-cf613849ac75-kube-api-access-r5fdb\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996773 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-lokistack-gateway\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996801 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tenants\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996830 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996858 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: 
\"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.996878 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpd2w\" (UniqueName: \"kubernetes.io/projected/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-kube-api-access-xpd2w\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: E1124 11:28:27.997060 4678 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 24 11:28:27 crc kubenswrapper[4678]: E1124 11:28:27.997146 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tls-secret podName:3b4a9171-61ec-4c11-ad33-cf613849ac75 nodeName:}" failed. No retries permitted until 2025-11-24 11:28:28.497123473 +0000 UTC m=+719.428183112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tls-secret") pod "logging-loki-gateway-88ddc8cf9-5hnj5" (UID: "3b4a9171-61ec-4c11-ad33-cf613849ac75") : secret "logging-loki-gateway-http" not found Nov 24 11:28:27 crc kubenswrapper[4678]: E1124 11:28:27.998080 4678 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 24 11:28:27 crc kubenswrapper[4678]: E1124 11:28:27.998183 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tls-secret podName:0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2 nodeName:}" failed. No retries permitted until 2025-11-24 11:28:28.498159741 +0000 UTC m=+719.429219380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tls-secret") pod "logging-loki-gateway-88ddc8cf9-2ldpd" (UID: "0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2") : secret "logging-loki-gateway-http" not found Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.998753 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-lokistack-gateway\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.998894 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.999132 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-rbac\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:27 crc kubenswrapper[4678]: I1124 11:28:27.999535 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.000115 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-rbac\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.001264 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.001616 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3b4a9171-61ec-4c11-ad33-cf613849ac75-lokistack-gateway\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.002237 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.004286 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " 
pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.007233 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tenants\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.010190 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.011113 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tenants\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.021021 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpd2w\" (UniqueName: \"kubernetes.io/projected/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-kube-api-access-xpd2w\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.023480 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fdb\" (UniqueName: \"kubernetes.io/projected/3b4a9171-61ec-4c11-ad33-cf613849ac75-kube-api-access-r5fdb\") pod 
\"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.127944 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf"] Nov 24 11:28:28 crc kubenswrapper[4678]: W1124 11:28:28.137019 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4833108_5c1f_4961_bb34_9bb438a1c4ef.slice/crio-27196351b4e0d667564f6b3f4513bdb3876e0c5dbf5d433e187862dddf817053 WatchSource:0}: Error finding container 27196351b4e0d667564f6b3f4513bdb3876e0c5dbf5d433e187862dddf817053: Status 404 returned error can't find the container with id 27196351b4e0d667564f6b3f4513bdb3876e0c5dbf5d433e187862dddf817053 Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.219715 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.260420 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" event={"ID":"f4833108-5c1f-4961-bb34-9bb438a1c4ef","Type":"ContainerStarted","Data":"27196351b4e0d667564f6b3f4513bdb3876e0c5dbf5d433e187862dddf817053"} Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.378417 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-dbwm8"] Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.469637 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.470887 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.473768 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.473940 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.475013 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.507517 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tls-secret\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.507587 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tls-secret\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.513983 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3b4a9171-61ec-4c11-ad33-cf613849ac75-tls-secret\") pod \"logging-loki-gateway-88ddc8cf9-5hnj5\" (UID: \"3b4a9171-61ec-4c11-ad33-cf613849ac75\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.529494 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2-tls-secret\") pod \"logging-loki-gateway-88ddc8cf9-2ldpd\" (UID: \"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2\") " pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.529532 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.530642 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.541195 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.541945 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.553460 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.613620 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njdw\" (UniqueName: \"kubernetes.io/projected/bae3408d-f5fc-4bc0-b911-69de95e61536-kube-api-access-6njdw\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.613718 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.613814 
4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f4e2f191-ca23-4cdf-bea3-52331d5ba5aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4e2f191-ca23-4cdf-bea3-52331d5ba5aa\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.613848 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.613879 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae3408d-f5fc-4bc0-b911-69de95e61536-config\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.613905 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.613936 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: 
\"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.613985 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a245653-16d0-4ae8-8950-f23b0dcabc87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a245653-16d0-4ae8-8950-f23b0dcabc87\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.681467 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.683113 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.690463 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.692991 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.696405 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.697264 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.697976 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715615 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f4e2f191-ca23-4cdf-bea3-52331d5ba5aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4e2f191-ca23-4cdf-bea3-52331d5ba5aa\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715711 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715755 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715800 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae3408d-f5fc-4bc0-b911-69de95e61536-config\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715857 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715876 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmlp7\" (UniqueName: \"kubernetes.io/projected/a1a95c24-e0a9-4acb-a52c-7face078ba60-kube-api-access-nmlp7\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715896 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715928 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a245653-16d0-4ae8-8950-f23b0dcabc87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a245653-16d0-4ae8-8950-f23b0dcabc87\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 
11:28:28.715955 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.715975 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6njdw\" (UniqueName: \"kubernetes.io/projected/bae3408d-f5fc-4bc0-b911-69de95e61536-kube-api-access-6njdw\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.716010 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-19614bc4-55f4-45a6-97ef-ceff04f004ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19614bc4-55f4-45a6-97ef-ceff04f004ae\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.716048 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.716086 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.716107 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.716136 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a95c24-e0a9-4acb-a52c-7face078ba60-config\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.717749 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.717982 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae3408d-f5fc-4bc0-b911-69de95e61536-config\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.722193 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.726974 4678 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.727023 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a245653-16d0-4ae8-8950-f23b0dcabc87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a245653-16d0-4ae8-8950-f23b0dcabc87\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cb5ee69055ce5e5cfa7db23a447e37f354389609575d6f29db33397ab8039ee8/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.727178 4678 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.727240 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f4e2f191-ca23-4cdf-bea3-52331d5ba5aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4e2f191-ca23-4cdf-bea3-52331d5ba5aa\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9dd5d1cbf2bcef2baf31516d3b3890aad33e05e9ff1ef1d7e909adc91a376f86/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.728413 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.730865 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bae3408d-f5fc-4bc0-b911-69de95e61536-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.740442 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq"]
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.761614 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6njdw\" (UniqueName: \"kubernetes.io/projected/bae3408d-f5fc-4bc0-b911-69de95e61536-kube-api-access-6njdw\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.802421 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9a245653-16d0-4ae8-8950-f23b0dcabc87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a245653-16d0-4ae8-8950-f23b0dcabc87\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.836458 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.836644 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmlp7\" (UniqueName: \"kubernetes.io/projected/a1a95c24-e0a9-4acb-a52c-7face078ba60-kube-api-access-nmlp7\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.839564 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.839619 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-19614bc4-55f4-45a6-97ef-ceff04f004ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19614bc4-55f4-45a6-97ef-ceff04f004ae\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.837913 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f4e2f191-ca23-4cdf-bea3-52331d5ba5aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4e2f191-ca23-4cdf-bea3-52331d5ba5aa\") pod \"logging-loki-ingester-0\" (UID: \"bae3408d-f5fc-4bc0-b911-69de95e61536\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.839851 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.840066 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.840217 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a95c24-e0a9-4acb-a52c-7face078ba60-config\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.841212 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.841494 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a95c24-e0a9-4acb-a52c-7face078ba60-config\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.849297 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.853546 4678 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.853591 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-19614bc4-55f4-45a6-97ef-ceff04f004ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19614bc4-55f4-45a6-97ef-ceff04f004ae\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c91d71f8d4466899dc59495f9e60dec95052045fb993dfafbea4e360890cf6eb/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.853943 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.859517 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a1a95c24-e0a9-4acb-a52c-7face078ba60-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.860864 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmlp7\" (UniqueName: \"kubernetes.io/projected/a1a95c24-e0a9-4acb-a52c-7face078ba60-kube-api-access-nmlp7\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.879609 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.941798 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9qnj\" (UniqueName: \"kubernetes.io/projected/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-kube-api-access-j9qnj\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.941867 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.941926 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-578dea8a-3275-458c-b899-e5df987099fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-578dea8a-3275-458c-b899-e5df987099fe\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.941962 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.942000 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.942032 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:28 crc kubenswrapper[4678]: I1124 11:28:28.942094 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.023994 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-19614bc4-55f4-45a6-97ef-ceff04f004ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19614bc4-55f4-45a6-97ef-ceff04f004ae\") pod \"logging-loki-compactor-0\" (UID: \"a1a95c24-e0a9-4acb-a52c-7face078ba60\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.044031 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.044102 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.044129 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.044190 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.044228 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9qnj\" (UniqueName: \"kubernetes.io/projected/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-kube-api-access-j9qnj\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.044252 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.044285 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-578dea8a-3275-458c-b899-e5df987099fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-578dea8a-3275-458c-b899-e5df987099fe\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.045715 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.045785 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.048568 4678 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.048617 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-578dea8a-3275-458c-b899-e5df987099fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-578dea8a-3275-458c-b899-e5df987099fe\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/673c30a50a57baee9987f7e2b3676fe1099256d16b85c2ab58786b6a4083632e/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.048719 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.049399 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.050061 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.064265 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9qnj\" (UniqueName: \"kubernetes.io/projected/7ee7bb9f-9ca9-491b-820c-d6e359bb06ec-kube-api-access-j9qnj\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.095427 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-578dea8a-3275-458c-b899-e5df987099fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-578dea8a-3275-458c-b899-e5df987099fe\") pod \"logging-loki-index-gateway-0\" (UID: \"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.108095 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.204454 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.280476 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" event={"ID":"e50b79b7-550a-4135-9a07-71ba28340eb6","Type":"ContainerStarted","Data":"d98e0915cd9814adc3c20c838eb0a1be494945e9216bd1550f5b21d0a6b330f3"}
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.284276 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" event={"ID":"b85d2201-78d6-477e-a798-2096dc5b916a","Type":"ContainerStarted","Data":"274dadda75c930dd5068a1c80e4a7d238125d9aee5d517c98b790ed59f6b5a15"}
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.346706 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5"]
Nov 24 11:28:29 crc kubenswrapper[4678]: W1124 11:28:29.357000 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4a9171_61ec_4c11_ad33_cf613849ac75.slice/crio-f76be4c75a6e26a05163dfa37a298f3ebd9f131267c208d594326a407d7e69ba WatchSource:0}: Error finding container f76be4c75a6e26a05163dfa37a298f3ebd9f131267c208d594326a407d7e69ba: Status 404 returned error can't find the container with id f76be4c75a6e26a05163dfa37a298f3ebd9f131267c208d594326a407d7e69ba
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.433406 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd"]
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.448416 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Nov 24 11:28:29 crc kubenswrapper[4678]: W1124 11:28:29.462412 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1a95c24_e0a9_4acb_a52c_7face078ba60.slice/crio-59a3089db796a7e229fb43bd726c2916e8330838512d7acb4d6dd4a2007fd7ad WatchSource:0}: Error finding container 59a3089db796a7e229fb43bd726c2916e8330838512d7acb4d6dd4a2007fd7ad: Status 404 returned error can't find the container with id 59a3089db796a7e229fb43bd726c2916e8330838512d7acb4d6dd4a2007fd7ad
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.487338 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Nov 24 11:28:29 crc kubenswrapper[4678]: W1124 11:28:29.490574 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbae3408d_f5fc_4bc0_b911_69de95e61536.slice/crio-389998f46b05a48bfa4d9f8d5aa7233a8df5bdabd1bf8eb8f58218f51811f42f WatchSource:0}: Error finding container 389998f46b05a48bfa4d9f8d5aa7233a8df5bdabd1bf8eb8f58218f51811f42f: Status 404 returned error can't find the container with id 389998f46b05a48bfa4d9f8d5aa7233a8df5bdabd1bf8eb8f58218f51811f42f
Nov 24 11:28:29 crc kubenswrapper[4678]: I1124 11:28:29.580280 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Nov 24 11:28:29 crc kubenswrapper[4678]: W1124 11:28:29.587224 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ee7bb9f_9ca9_491b_820c_d6e359bb06ec.slice/crio-3188b55fe11e6677672956e02824315f4dfc1b82b84abc00cc833272feefcb86 WatchSource:0}: Error finding container 3188b55fe11e6677672956e02824315f4dfc1b82b84abc00cc833272feefcb86: Status 404 returned error can't find the container with id 3188b55fe11e6677672956e02824315f4dfc1b82b84abc00cc833272feefcb86
Nov 24 11:28:30 crc kubenswrapper[4678]: I1124 11:28:30.296217 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-logging/logging-loki-compactor-0" event={"ID":"a1a95c24-e0a9-4acb-a52c-7face078ba60","Type":"ContainerStarted","Data":"59a3089db796a7e229fb43bd726c2916e8330838512d7acb4d6dd4a2007fd7ad"}
Nov 24 11:28:30 crc kubenswrapper[4678]: I1124 11:28:30.296915 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 11:28:30 crc kubenswrapper[4678]: I1124 11:28:30.297029 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 11:28:30 crc kubenswrapper[4678]: I1124 11:28:30.299877 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"bae3408d-f5fc-4bc0-b911-69de95e61536","Type":"ContainerStarted","Data":"389998f46b05a48bfa4d9f8d5aa7233a8df5bdabd1bf8eb8f58218f51811f42f"}
Nov 24 11:28:30 crc kubenswrapper[4678]: I1124 11:28:30.302106 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec","Type":"ContainerStarted","Data":"3188b55fe11e6677672956e02824315f4dfc1b82b84abc00cc833272feefcb86"}
Nov 24 11:28:30 crc kubenswrapper[4678]: I1124 11:28:30.303359 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" event={"ID":"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2","Type":"ContainerStarted","Data":"9443586bd9305d4b6aa661d49f4467e5c0bd534c3e4f6b65fc53fdf8b5f89ab7"}
Nov 24 11:28:30 crc kubenswrapper[4678]: I1124 11:28:30.305372 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" event={"ID":"3b4a9171-61ec-4c11-ad33-cf613849ac75","Type":"ContainerStarted","Data":"f76be4c75a6e26a05163dfa37a298f3ebd9f131267c208d594326a407d7e69ba"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.340854 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"a1a95c24-e0a9-4acb-a52c-7face078ba60","Type":"ContainerStarted","Data":"c2dbf606a505cbfa1465aef44eb0d8549974e719c57e8487d285d55c2ea04d4d"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.341545 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.343892 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" event={"ID":"b85d2201-78d6-477e-a798-2096dc5b916a","Type":"ContainerStarted","Data":"24ac28570392a4f14d40d019e6f26b7729628c7e4456496ba5e1e19ddd756180"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.343995 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.354357 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"bae3408d-f5fc-4bc0-b911-69de95e61536","Type":"ContainerStarted","Data":"e28448d2c87e75660c12c78c06bca4db458a2c77338721bcef2c80f0d4a6b6a4"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.354756 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.366920 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" event={"ID":"f4833108-5c1f-4961-bb34-9bb438a1c4ef","Type":"ContainerStarted","Data":"26a1e0c12aca1162ef394d8d8d5bd17e017303683ff4c3b4b7a5d0916efb8f21"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.367554 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.373923 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7ee7bb9f-9ca9-491b-820c-d6e359bb06ec","Type":"ContainerStarted","Data":"2d92c993dcd311f48772481feeda0fc23fc0b5e171f9fcb94f289154380a69f8"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.374841 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.382628 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" event={"ID":"e50b79b7-550a-4135-9a07-71ba28340eb6","Type":"ContainerStarted","Data":"1f45d749b753225e00fb32732bcfdb0e8d4b7136b29e81f5946e627957e44d17"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.382855 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.384943 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" event={"ID":"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2","Type":"ContainerStarted","Data":"5f5fb8a45d94d2a8b96bd590e35fcbf90515955542de6c09acc128a26a09205e"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.385895 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.213586501 podStartE2EDuration="6.385876647s" podCreationTimestamp="2025-11-24 11:28:27 +0000 UTC" firstStartedPulling="2025-11-24 11:28:29.465198823 +0000 UTC m=+720.396258462" lastFinishedPulling="2025-11-24 11:28:32.637488969 +0000 UTC m=+723.568548608" observedRunningTime="2025-11-24 11:28:33.377137624 +0000 UTC m=+724.308197263" watchObservedRunningTime="2025-11-24 11:28:33.385876647 +0000 UTC m=+724.316936286"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.387413 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" event={"ID":"3b4a9171-61ec-4c11-ad33-cf613849ac75","Type":"ContainerStarted","Data":"004bbc8bf0bd25ab1c3f6fd0d0f7b47bb34014728c7bbb1e8e2b635309b612a3"}
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.405054 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" podStartSLOduration=2.192360575 podStartE2EDuration="6.405031658s" podCreationTimestamp="2025-11-24 11:28:27 +0000 UTC" firstStartedPulling="2025-11-24 11:28:28.393876189 +0000 UTC m=+719.324935828" lastFinishedPulling="2025-11-24 11:28:32.606547272 +0000 UTC m=+723.537606911" observedRunningTime="2025-11-24 11:28:33.403881718 +0000 UTC m=+724.334941357" watchObservedRunningTime="2025-11-24 11:28:33.405031658 +0000 UTC m=+724.336091297"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.460100 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" podStartSLOduration=2.165607661 podStartE2EDuration="6.460082079s" podCreationTimestamp="2025-11-24 11:28:27 +0000 UTC" firstStartedPulling="2025-11-24 11:28:28.141357305 +0000 UTC m=+719.072416944" lastFinishedPulling="2025-11-24 11:28:32.435831723 +0000 UTC m=+723.366891362" observedRunningTime="2025-11-24 11:28:33.437183567 +0000 UTC m=+724.368243216" watchObservedRunningTime="2025-11-24 11:28:33.460082079 +0000 UTC m=+724.391141708"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.465014 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.457548486 podStartE2EDuration="6.46500619s" podCreationTimestamp="2025-11-24 11:28:27 +0000 UTC" firstStartedPulling="2025-11-24 11:28:29.590593342 +0000 UTC m=+720.521652981" lastFinishedPulling="2025-11-24 11:28:32.598051046 +0000 UTC m=+723.529110685" observedRunningTime="2025-11-24 11:28:33.456835412 +0000 UTC m=+724.387895051" watchObservedRunningTime="2025-11-24 11:28:33.46500619 +0000 UTC m=+724.396065829"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.480912 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.376961294 podStartE2EDuration="6.480840764s" podCreationTimestamp="2025-11-24 11:28:27 +0000 UTC" firstStartedPulling="2025-11-24 11:28:29.494134825 +0000 UTC m=+720.425194464" lastFinishedPulling="2025-11-24 11:28:32.598014295 +0000 UTC m=+723.529073934" observedRunningTime="2025-11-24 11:28:33.479388615 +0000 UTC m=+724.410448254" watchObservedRunningTime="2025-11-24 11:28:33.480840764 +0000 UTC m=+724.411900403"
Nov 24 11:28:33 crc kubenswrapper[4678]: I1124 11:28:33.501948 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" podStartSLOduration=2.656216775 podStartE2EDuration="6.501927387s" podCreationTimestamp="2025-11-24 11:28:27 +0000 UTC" firstStartedPulling="2025-11-24 11:28:28.758345674 +0000 UTC m=+719.689405313" lastFinishedPulling="2025-11-24 11:28:32.604056286 +0000 UTC m=+723.535115925" observedRunningTime="2025-11-24 11:28:33.49980822 +0000 UTC m=+724.430867859" watchObservedRunningTime="2025-11-24 11:28:33.501927387 +0000 UTC m=+724.432987016"
Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.416000 4678 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" event={"ID":"3b4a9171-61ec-4c11-ad33-cf613849ac75","Type":"ContainerStarted","Data":"afbed7971a2655eeac55197b6842d705091fc93cb66b186963f0b846c853db80"} Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.416502 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.416521 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.420124 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" event={"ID":"0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2","Type":"ContainerStarted","Data":"1b0f88523c7f53e08d20ea5a87d8bf295331fe16a656124bda68303887353f0d"} Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.420882 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.420964 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.429900 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.435117 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.436204 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.437652 4678 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.452798 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-5hnj5" podStartSLOduration=3.3585508219999998 podStartE2EDuration="9.452773238s" podCreationTimestamp="2025-11-24 11:28:27 +0000 UTC" firstStartedPulling="2025-11-24 11:28:29.362596332 +0000 UTC m=+720.293655971" lastFinishedPulling="2025-11-24 11:28:35.456818758 +0000 UTC m=+726.387878387" observedRunningTime="2025-11-24 11:28:36.446794479 +0000 UTC m=+727.377854118" watchObservedRunningTime="2025-11-24 11:28:36.452773238 +0000 UTC m=+727.383832887" Nov 24 11:28:36 crc kubenswrapper[4678]: I1124 11:28:36.513422 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-88ddc8cf9-2ldpd" podStartSLOduration=3.476095821 podStartE2EDuration="9.513391587s" podCreationTimestamp="2025-11-24 11:28:27 +0000 UTC" firstStartedPulling="2025-11-24 11:28:29.451038414 +0000 UTC m=+720.382098053" lastFinishedPulling="2025-11-24 11:28:35.48833418 +0000 UTC m=+726.419393819" observedRunningTime="2025-11-24 11:28:36.50712518 +0000 UTC m=+727.438184859" watchObservedRunningTime="2025-11-24 11:28:36.513391587 +0000 UTC m=+727.444451246" Nov 24 11:28:47 crc kubenswrapper[4678]: I1124 11:28:47.612774 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-jwzsf" Nov 24 11:28:47 crc kubenswrapper[4678]: I1124 11:28:47.879837 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-dbwm8" Nov 24 11:28:48 crc kubenswrapper[4678]: I1124 11:28:48.231228 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zw4kq" Nov 24 11:28:48 crc kubenswrapper[4678]: I1124 11:28:48.891119 4678 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 24 11:28:48 crc kubenswrapper[4678]: I1124 11:28:48.893438 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bae3408d-f5fc-4bc0-b911-69de95e61536" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 11:28:49 crc kubenswrapper[4678]: I1124 11:28:49.120628 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 11:28:49 crc kubenswrapper[4678]: I1124 11:28:49.218712 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 24 11:28:58 crc kubenswrapper[4678]: I1124 11:28:58.914994 4678 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 24 11:28:58 crc kubenswrapper[4678]: I1124 11:28:58.916603 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bae3408d-f5fc-4bc0-b911-69de95e61536" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.296405 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.296485 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.296553 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.297657 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bfe74ad72b1070a6c7e462d710c234790fcd2a6fff50a06b17d2f1671decd08"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.297796 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://2bfe74ad72b1070a6c7e462d710c234790fcd2a6fff50a06b17d2f1671decd08" gracePeriod=600 Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.670387 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="2bfe74ad72b1070a6c7e462d710c234790fcd2a6fff50a06b17d2f1671decd08" exitCode=0 Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.670612 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"2bfe74ad72b1070a6c7e462d710c234790fcd2a6fff50a06b17d2f1671decd08"} Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.671017 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"1197580eb03eaddc7b9dc08dbab8ba6891f416c80d33f4fc3fc03e3113ad80b4"} Nov 24 11:29:00 crc kubenswrapper[4678]: I1124 11:29:00.671050 4678 scope.go:117] "RemoveContainer" containerID="538be58fbebd66fe558f9e6e8bc6084171acfd8da3f2cb10d27be45e829cefaa" Nov 24 11:29:06 crc kubenswrapper[4678]: I1124 11:29:06.815065 4678 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 11:29:08 crc kubenswrapper[4678]: I1124 11:29:08.889445 4678 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 24 11:29:08 crc kubenswrapper[4678]: I1124 11:29:08.889549 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bae3408d-f5fc-4bc0-b911-69de95e61536" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 11:29:18 crc kubenswrapper[4678]: I1124 11:29:18.888205 4678 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 24 11:29:18 crc kubenswrapper[4678]: I1124 11:29:18.888555 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" 
podUID="bae3408d-f5fc-4bc0-b911-69de95e61536" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 11:29:28 crc kubenswrapper[4678]: I1124 11:29:28.889467 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.233393 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-t6tbd"] Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.273179 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.279036 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.279724 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.280173 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-s9t4g" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.284059 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.284558 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.286521 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-t6tbd"] Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.291362 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.348164 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-logging/collector-t6tbd"] Nov 24 11:29:48 crc kubenswrapper[4678]: E1124 11:29:48.348954 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-nvc2h metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-nvc2h metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-t6tbd" podUID="de84ef29-3f8f-461e-aa86-1e186d3948db" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379173 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de84ef29-3f8f-461e-aa86-1e186d3948db-tmp\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379237 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-syslog-receiver\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379280 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-entrypoint\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379320 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" 
(UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-token\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379405 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-trusted-ca\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379522 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config-openshift-service-cacrt\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379548 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvc2h\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-kube-api-access-nvc2h\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379566 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379589 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/de84ef29-3f8f-461e-aa86-1e186d3948db-datadir\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379606 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-metrics\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.379638 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-sa-token\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.481341 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-token\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.481709 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-trusted-ca\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.481873 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config-openshift-service-cacrt\") pod \"collector-t6tbd\" (UID: 
\"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.481976 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvc2h\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-kube-api-access-nvc2h\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.482054 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.482177 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/de84ef29-3f8f-461e-aa86-1e186d3948db-datadir\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.482316 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/de84ef29-3f8f-461e-aa86-1e186d3948db-datadir\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.482525 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-metrics\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.482704 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-sa-token\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.483178 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de84ef29-3f8f-461e-aa86-1e186d3948db-tmp\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.483273 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-trusted-ca\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.483046 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config-openshift-service-cacrt\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.483126 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.483452 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-syslog-receiver\") pod 
\"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.483969 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-entrypoint\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.485000 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-entrypoint\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.488226 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-syslog-receiver\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.491308 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-metrics\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.492049 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de84ef29-3f8f-461e-aa86-1e186d3948db-tmp\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.501509 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-token\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.505210 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-sa-token\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:48 crc kubenswrapper[4678]: I1124 11:29:48.505452 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvc2h\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-kube-api-access-nvc2h\") pod \"collector-t6tbd\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " pod="openshift-logging/collector-t6tbd" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.158185 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-t6tbd" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.171900 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-t6tbd" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197114 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/de84ef29-3f8f-461e-aa86-1e186d3948db-datadir\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197204 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-entrypoint\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197298 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-trusted-ca\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197326 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-sa-token\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197351 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de84ef29-3f8f-461e-aa86-1e186d3948db-tmp\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197390 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-metrics\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197444 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config-openshift-service-cacrt\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197478 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-token\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197525 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197573 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvc2h\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-kube-api-access-nvc2h\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.197613 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-syslog-receiver\") pod \"de84ef29-3f8f-461e-aa86-1e186d3948db\" (UID: \"de84ef29-3f8f-461e-aa86-1e186d3948db\") " Nov 24 11:29:49 
crc kubenswrapper[4678]: I1124 11:29:49.199738 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.199873 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.199958 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de84ef29-3f8f-461e-aa86-1e186d3948db-datadir" (OuterVolumeSpecName: "datadir") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.199997 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.200113 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config" (OuterVolumeSpecName: "config") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.207478 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-token" (OuterVolumeSpecName: "collector-token") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.214936 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.219547 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de84ef29-3f8f-461e-aa86-1e186d3948db-tmp" (OuterVolumeSpecName: "tmp") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.219891 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-sa-token" (OuterVolumeSpecName: "sa-token") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.223838 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-kube-api-access-nvc2h" (OuterVolumeSpecName: "kube-api-access-nvc2h") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "kube-api-access-nvc2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.223903 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-metrics" (OuterVolumeSpecName: "metrics") pod "de84ef29-3f8f-461e-aa86-1e186d3948db" (UID: "de84ef29-3f8f-461e-aa86-1e186d3948db"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.300868 4678 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de84ef29-3f8f-461e-aa86-1e186d3948db-tmp\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.300926 4678 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.300944 4678 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.300958 4678 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.300975 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.300988 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvc2h\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-kube-api-access-nvc2h\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.301003 4678 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/de84ef29-3f8f-461e-aa86-1e186d3948db-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 
11:29:49.301015 4678 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/de84ef29-3f8f-461e-aa86-1e186d3948db-datadir\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.301026 4678 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-entrypoint\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.301036 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de84ef29-3f8f-461e-aa86-1e186d3948db-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:49 crc kubenswrapper[4678]: I1124 11:29:49.301048 4678 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/de84ef29-3f8f-461e-aa86-1e186d3948db-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.164284 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-t6tbd" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.220753 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-t6tbd"] Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.226520 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-t6tbd"] Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.237370 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-srddw"] Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.239053 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.245843 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.246109 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.246977 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-s9t4g" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.247499 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.247639 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.249462 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.252153 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-srddw"] Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317355 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mnq\" (UniqueName: \"kubernetes.io/projected/06c7953f-f0d2-4db1-b53e-633539ce1c56-kube-api-access-v4mnq\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317406 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-entrypoint\") pod \"collector-srddw\" (UID: 
\"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317447 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-config-openshift-service-cacrt\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317477 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-metrics\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317492 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/06c7953f-f0d2-4db1-b53e-633539ce1c56-sa-token\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317512 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-collector-syslog-receiver\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317536 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-collector-token\") pod \"collector-srddw\" (UID: 
\"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317553 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-trusted-ca\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317595 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-config\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317612 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/06c7953f-f0d2-4db1-b53e-633539ce1c56-datadir\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.317651 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06c7953f-f0d2-4db1-b53e-633539ce1c56-tmp\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420081 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-config-openshift-service-cacrt\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 
11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420216 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-metrics\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420242 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/06c7953f-f0d2-4db1-b53e-633539ce1c56-sa-token\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420297 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-collector-syslog-receiver\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420334 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-collector-token\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420375 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-trusted-ca\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420448 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/06c7953f-f0d2-4db1-b53e-633539ce1c56-datadir\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420466 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-config\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420528 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06c7953f-f0d2-4db1-b53e-633539ce1c56-tmp\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420575 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4mnq\" (UniqueName: \"kubernetes.io/projected/06c7953f-f0d2-4db1-b53e-633539ce1c56-kube-api-access-v4mnq\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420588 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/06c7953f-f0d2-4db1-b53e-633539ce1c56-datadir\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.420621 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-entrypoint\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 
11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.421618 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-trusted-ca\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.421819 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-entrypoint\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.422125 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-config-openshift-service-cacrt\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.422426 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06c7953f-f0d2-4db1-b53e-633539ce1c56-config\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.426308 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06c7953f-f0d2-4db1-b53e-633539ce1c56-tmp\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.426872 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-metrics\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.426905 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-collector-token\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.428315 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/06c7953f-f0d2-4db1-b53e-633539ce1c56-collector-syslog-receiver\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.438866 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4mnq\" (UniqueName: \"kubernetes.io/projected/06c7953f-f0d2-4db1-b53e-633539ce1c56-kube-api-access-v4mnq\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.446816 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/06c7953f-f0d2-4db1-b53e-633539ce1c56-sa-token\") pod \"collector-srddw\" (UID: \"06c7953f-f0d2-4db1-b53e-633539ce1c56\") " pod="openshift-logging/collector-srddw" Nov 24 11:29:50 crc kubenswrapper[4678]: I1124 11:29:50.560968 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-srddw" Nov 24 11:29:51 crc kubenswrapper[4678]: I1124 11:29:51.051793 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-srddw"] Nov 24 11:29:51 crc kubenswrapper[4678]: I1124 11:29:51.176066 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-srddw" event={"ID":"06c7953f-f0d2-4db1-b53e-633539ce1c56","Type":"ContainerStarted","Data":"e252368e91d8328abc54613ff12e2f19e36e0c2e430f05917a67e12ec852780a"} Nov 24 11:29:51 crc kubenswrapper[4678]: I1124 11:29:51.907785 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de84ef29-3f8f-461e-aa86-1e186d3948db" path="/var/lib/kubelet/pods/de84ef29-3f8f-461e-aa86-1e186d3948db/volumes" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.597329 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s7jq5"] Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.599100 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.605032 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7jq5"] Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.683772 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-catalog-content\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.683876 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-utilities\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.684026 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xvp\" (UniqueName: \"kubernetes.io/projected/e85db410-7b39-432b-b894-b39519a7a15c-kube-api-access-t7xvp\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.785413 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7xvp\" (UniqueName: \"kubernetes.io/projected/e85db410-7b39-432b-b894-b39519a7a15c-kube-api-access-t7xvp\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.785524 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-catalog-content\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.785604 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-utilities\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.786141 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-catalog-content\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.786231 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-utilities\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.810947 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7xvp\" (UniqueName: \"kubernetes.io/projected/e85db410-7b39-432b-b894-b39519a7a15c-kube-api-access-t7xvp\") pod \"community-operators-s7jq5\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:52 crc kubenswrapper[4678]: I1124 11:29:52.937107 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:29:53 crc kubenswrapper[4678]: I1124 11:29:53.486244 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7jq5"] Nov 24 11:29:53 crc kubenswrapper[4678]: W1124 11:29:53.494606 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85db410_7b39_432b_b894_b39519a7a15c.slice/crio-73db58df12f0801022dfa91aef94a02b81e9aae8e03bcc5c84872cf73ba93350 WatchSource:0}: Error finding container 73db58df12f0801022dfa91aef94a02b81e9aae8e03bcc5c84872cf73ba93350: Status 404 returned error can't find the container with id 73db58df12f0801022dfa91aef94a02b81e9aae8e03bcc5c84872cf73ba93350 Nov 24 11:29:54 crc kubenswrapper[4678]: I1124 11:29:54.213138 4678 generic.go:334] "Generic (PLEG): container finished" podID="e85db410-7b39-432b-b894-b39519a7a15c" containerID="41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753" exitCode=0 Nov 24 11:29:54 crc kubenswrapper[4678]: I1124 11:29:54.213193 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jq5" event={"ID":"e85db410-7b39-432b-b894-b39519a7a15c","Type":"ContainerDied","Data":"41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753"} Nov 24 11:29:54 crc kubenswrapper[4678]: I1124 11:29:54.213225 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jq5" event={"ID":"e85db410-7b39-432b-b894-b39519a7a15c","Type":"ContainerStarted","Data":"73db58df12f0801022dfa91aef94a02b81e9aae8e03bcc5c84872cf73ba93350"} Nov 24 11:29:59 crc kubenswrapper[4678]: I1124 11:29:59.257356 4678 generic.go:334] "Generic (PLEG): container finished" podID="e85db410-7b39-432b-b894-b39519a7a15c" containerID="4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675" exitCode=0 Nov 24 11:29:59 crc kubenswrapper[4678]: I1124 
11:29:59.257460 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jq5" event={"ID":"e85db410-7b39-432b-b894-b39519a7a15c","Type":"ContainerDied","Data":"4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675"} Nov 24 11:29:59 crc kubenswrapper[4678]: I1124 11:29:59.260720 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-srddw" event={"ID":"06c7953f-f0d2-4db1-b53e-633539ce1c56","Type":"ContainerStarted","Data":"c1ee40c6bec23b5d368aeb3da5aaccbe30ae86e49e71bc5d458b0e9999fa8d2f"} Nov 24 11:29:59 crc kubenswrapper[4678]: I1124 11:29:59.318858 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-srddw" podStartSLOduration=1.728875612 podStartE2EDuration="9.318839346s" podCreationTimestamp="2025-11-24 11:29:50 +0000 UTC" firstStartedPulling="2025-11-24 11:29:51.072250944 +0000 UTC m=+802.003310593" lastFinishedPulling="2025-11-24 11:29:58.662214668 +0000 UTC m=+809.593274327" observedRunningTime="2025-11-24 11:29:59.314549671 +0000 UTC m=+810.245609340" watchObservedRunningTime="2025-11-24 11:29:59.318839346 +0000 UTC m=+810.249898985" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.140076 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb"] Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.141487 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.145194 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.146590 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.155211 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb"] Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.205488 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12469164-9579-47b7-8b32-2cf4fd1cb806-config-volume\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.205554 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12469164-9579-47b7-8b32-2cf4fd1cb806-secret-volume\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.205588 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zvlm\" (UniqueName: \"kubernetes.io/projected/12469164-9579-47b7-8b32-2cf4fd1cb806-kube-api-access-7zvlm\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.271861 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jq5" event={"ID":"e85db410-7b39-432b-b894-b39519a7a15c","Type":"ContainerStarted","Data":"cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4"} Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.301525 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s7jq5" podStartSLOduration=2.845504891 podStartE2EDuration="8.301499431s" podCreationTimestamp="2025-11-24 11:29:52 +0000 UTC" firstStartedPulling="2025-11-24 11:29:54.214840117 +0000 UTC m=+805.145899756" lastFinishedPulling="2025-11-24 11:29:59.670834637 +0000 UTC m=+810.601894296" observedRunningTime="2025-11-24 11:30:00.296151008 +0000 UTC m=+811.227210647" watchObservedRunningTime="2025-11-24 11:30:00.301499431 +0000 UTC m=+811.232559060" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.308032 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12469164-9579-47b7-8b32-2cf4fd1cb806-config-volume\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.308521 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12469164-9579-47b7-8b32-2cf4fd1cb806-secret-volume\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.308630 4678 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7zvlm\" (UniqueName: \"kubernetes.io/projected/12469164-9579-47b7-8b32-2cf4fd1cb806-kube-api-access-7zvlm\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.309195 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12469164-9579-47b7-8b32-2cf4fd1cb806-config-volume\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.318452 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12469164-9579-47b7-8b32-2cf4fd1cb806-secret-volume\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.331044 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zvlm\" (UniqueName: \"kubernetes.io/projected/12469164-9579-47b7-8b32-2cf4fd1cb806-kube-api-access-7zvlm\") pod \"collect-profiles-29399730-xppbb\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.459602 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:00 crc kubenswrapper[4678]: I1124 11:30:00.961033 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb"] Nov 24 11:30:01 crc kubenswrapper[4678]: I1124 11:30:01.281396 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" event={"ID":"12469164-9579-47b7-8b32-2cf4fd1cb806","Type":"ContainerStarted","Data":"97950105bad1d54bdda021339de36b3cc48a460a1c3bc09ae1a1c75662e2f740"} Nov 24 11:30:01 crc kubenswrapper[4678]: I1124 11:30:01.281494 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" event={"ID":"12469164-9579-47b7-8b32-2cf4fd1cb806","Type":"ContainerStarted","Data":"37b3922dac5f377fcd0c4b2e6fecf65365bea4ea530bfc9e3a9fee0fdf1ce5f8"} Nov 24 11:30:01 crc kubenswrapper[4678]: I1124 11:30:01.299143 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" podStartSLOduration=1.299119345 podStartE2EDuration="1.299119345s" podCreationTimestamp="2025-11-24 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:30:01.297852472 +0000 UTC m=+812.228912111" watchObservedRunningTime="2025-11-24 11:30:01.299119345 +0000 UTC m=+812.230178984" Nov 24 11:30:02 crc kubenswrapper[4678]: I1124 11:30:02.290609 4678 generic.go:334] "Generic (PLEG): container finished" podID="12469164-9579-47b7-8b32-2cf4fd1cb806" containerID="97950105bad1d54bdda021339de36b3cc48a460a1c3bc09ae1a1c75662e2f740" exitCode=0 Nov 24 11:30:02 crc kubenswrapper[4678]: I1124 11:30:02.290700 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" event={"ID":"12469164-9579-47b7-8b32-2cf4fd1cb806","Type":"ContainerDied","Data":"97950105bad1d54bdda021339de36b3cc48a460a1c3bc09ae1a1c75662e2f740"} Nov 24 11:30:02 crc kubenswrapper[4678]: I1124 11:30:02.938605 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:30:02 crc kubenswrapper[4678]: I1124 11:30:02.938738 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:30:02 crc kubenswrapper[4678]: I1124 11:30:02.986202 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.626481 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.775511 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zvlm\" (UniqueName: \"kubernetes.io/projected/12469164-9579-47b7-8b32-2cf4fd1cb806-kube-api-access-7zvlm\") pod \"12469164-9579-47b7-8b32-2cf4fd1cb806\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.775678 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12469164-9579-47b7-8b32-2cf4fd1cb806-secret-volume\") pod \"12469164-9579-47b7-8b32-2cf4fd1cb806\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.775953 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12469164-9579-47b7-8b32-2cf4fd1cb806-config-volume\") pod 
\"12469164-9579-47b7-8b32-2cf4fd1cb806\" (UID: \"12469164-9579-47b7-8b32-2cf4fd1cb806\") " Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.777405 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12469164-9579-47b7-8b32-2cf4fd1cb806-config-volume" (OuterVolumeSpecName: "config-volume") pod "12469164-9579-47b7-8b32-2cf4fd1cb806" (UID: "12469164-9579-47b7-8b32-2cf4fd1cb806"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.782303 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12469164-9579-47b7-8b32-2cf4fd1cb806-kube-api-access-7zvlm" (OuterVolumeSpecName: "kube-api-access-7zvlm") pod "12469164-9579-47b7-8b32-2cf4fd1cb806" (UID: "12469164-9579-47b7-8b32-2cf4fd1cb806"). InnerVolumeSpecName "kube-api-access-7zvlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.782431 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12469164-9579-47b7-8b32-2cf4fd1cb806-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "12469164-9579-47b7-8b32-2cf4fd1cb806" (UID: "12469164-9579-47b7-8b32-2cf4fd1cb806"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.879129 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zvlm\" (UniqueName: \"kubernetes.io/projected/12469164-9579-47b7-8b32-2cf4fd1cb806-kube-api-access-7zvlm\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.879192 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12469164-9579-47b7-8b32-2cf4fd1cb806-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:03 crc kubenswrapper[4678]: I1124 11:30:03.879212 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12469164-9579-47b7-8b32-2cf4fd1cb806-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:04 crc kubenswrapper[4678]: I1124 11:30:04.322707 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" event={"ID":"12469164-9579-47b7-8b32-2cf4fd1cb806","Type":"ContainerDied","Data":"37b3922dac5f377fcd0c4b2e6fecf65365bea4ea530bfc9e3a9fee0fdf1ce5f8"} Nov 24 11:30:04 crc kubenswrapper[4678]: I1124 11:30:04.323541 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b3922dac5f377fcd0c4b2e6fecf65365bea4ea530bfc9e3a9fee0fdf1ce5f8" Nov 24 11:30:04 crc kubenswrapper[4678]: I1124 11:30:04.322739 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb" Nov 24 11:30:04 crc kubenswrapper[4678]: I1124 11:30:04.375788 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:30:04 crc kubenswrapper[4678]: I1124 11:30:04.420289 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s7jq5"] Nov 24 11:30:06 crc kubenswrapper[4678]: I1124 11:30:06.337409 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s7jq5" podUID="e85db410-7b39-432b-b894-b39519a7a15c" containerName="registry-server" containerID="cri-o://cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4" gracePeriod=2 Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.239007 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.345161 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-utilities\") pod \"e85db410-7b39-432b-b894-b39519a7a15c\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.345350 4678 generic.go:334] "Generic (PLEG): container finished" podID="e85db410-7b39-432b-b894-b39519a7a15c" containerID="cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4" exitCode=0 Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.345448 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-catalog-content\") pod \"e85db410-7b39-432b-b894-b39519a7a15c\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " Nov 24 
11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.345407 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jq5" event={"ID":"e85db410-7b39-432b-b894-b39519a7a15c","Type":"ContainerDied","Data":"cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4"} Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.345498 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7xvp\" (UniqueName: \"kubernetes.io/projected/e85db410-7b39-432b-b894-b39519a7a15c-kube-api-access-t7xvp\") pod \"e85db410-7b39-432b-b894-b39519a7a15c\" (UID: \"e85db410-7b39-432b-b894-b39519a7a15c\") " Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.345543 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jq5" event={"ID":"e85db410-7b39-432b-b894-b39519a7a15c","Type":"ContainerDied","Data":"73db58df12f0801022dfa91aef94a02b81e9aae8e03bcc5c84872cf73ba93350"} Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.345582 4678 scope.go:117] "RemoveContainer" containerID="cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.345449 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7jq5" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.350753 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85db410-7b39-432b-b894-b39519a7a15c-kube-api-access-t7xvp" (OuterVolumeSpecName: "kube-api-access-t7xvp") pod "e85db410-7b39-432b-b894-b39519a7a15c" (UID: "e85db410-7b39-432b-b894-b39519a7a15c"). InnerVolumeSpecName "kube-api-access-t7xvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.351130 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-utilities" (OuterVolumeSpecName: "utilities") pod "e85db410-7b39-432b-b894-b39519a7a15c" (UID: "e85db410-7b39-432b-b894-b39519a7a15c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.393211 4678 scope.go:117] "RemoveContainer" containerID="4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.409436 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e85db410-7b39-432b-b894-b39519a7a15c" (UID: "e85db410-7b39-432b-b894-b39519a7a15c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.411061 4678 scope.go:117] "RemoveContainer" containerID="41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.444342 4678 scope.go:117] "RemoveContainer" containerID="cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4" Nov 24 11:30:07 crc kubenswrapper[4678]: E1124 11:30:07.445471 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4\": container with ID starting with cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4 not found: ID does not exist" containerID="cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.445525 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4"} err="failed to get container status \"cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4\": rpc error: code = NotFound desc = could not find container \"cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4\": container with ID starting with cd0f6e5135d5e38d1fabadf83c78953c2932a506a78f9019e902d6ef538e48c4 not found: ID does not exist" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.445561 4678 scope.go:117] "RemoveContainer" containerID="4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675" Nov 24 11:30:07 crc kubenswrapper[4678]: E1124 11:30:07.446005 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675\": container with ID starting with 
4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675 not found: ID does not exist" containerID="4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.446029 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675"} err="failed to get container status \"4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675\": rpc error: code = NotFound desc = could not find container \"4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675\": container with ID starting with 4d42ed5cec6395ae1f6b1c5d9ae4fd55481e171f1cd4e5b257c7c4ed00776675 not found: ID does not exist" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.446045 4678 scope.go:117] "RemoveContainer" containerID="41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753" Nov 24 11:30:07 crc kubenswrapper[4678]: E1124 11:30:07.446351 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753\": container with ID starting with 41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753 not found: ID does not exist" containerID="41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.446372 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753"} err="failed to get container status \"41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753\": rpc error: code = NotFound desc = could not find container \"41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753\": container with ID starting with 41b61b086eff4fb1d0cda547223184fab4e49f88ddf2603ee93acba38af6d753 not found: ID does not 
exist" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.446935 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.446964 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7xvp\" (UniqueName: \"kubernetes.io/projected/e85db410-7b39-432b-b894-b39519a7a15c-kube-api-access-t7xvp\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.446977 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85db410-7b39-432b-b894-b39519a7a15c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.721615 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s7jq5"] Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.728560 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s7jq5"] Nov 24 11:30:07 crc kubenswrapper[4678]: I1124 11:30:07.904834 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85db410-7b39-432b-b894-b39519a7a15c" path="/var/lib/kubelet/pods/e85db410-7b39-432b-b894-b39519a7a15c/volumes" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.484425 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6jqjh"] Nov 24 11:30:08 crc kubenswrapper[4678]: E1124 11:30:08.484991 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12469164-9579-47b7-8b32-2cf4fd1cb806" containerName="collect-profiles" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.485016 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="12469164-9579-47b7-8b32-2cf4fd1cb806" containerName="collect-profiles" Nov 24 11:30:08 
crc kubenswrapper[4678]: E1124 11:30:08.485048 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85db410-7b39-432b-b894-b39519a7a15c" containerName="extract-utilities" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.485060 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85db410-7b39-432b-b894-b39519a7a15c" containerName="extract-utilities" Nov 24 11:30:08 crc kubenswrapper[4678]: E1124 11:30:08.485087 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85db410-7b39-432b-b894-b39519a7a15c" containerName="extract-content" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.485100 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85db410-7b39-432b-b894-b39519a7a15c" containerName="extract-content" Nov 24 11:30:08 crc kubenswrapper[4678]: E1124 11:30:08.485123 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85db410-7b39-432b-b894-b39519a7a15c" containerName="registry-server" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.485134 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85db410-7b39-432b-b894-b39519a7a15c" containerName="registry-server" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.485368 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85db410-7b39-432b-b894-b39519a7a15c" containerName="registry-server" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.485402 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="12469164-9579-47b7-8b32-2cf4fd1cb806" containerName="collect-profiles" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.490777 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.503137 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6jqjh"] Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.566052 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-catalog-content\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.566401 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-utilities\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.566558 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mxf2\" (UniqueName: \"kubernetes.io/projected/0876a61c-8136-4da5-9683-0d0ae61de9b7-kube-api-access-8mxf2\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.668383 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-catalog-content\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.668450 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-utilities\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.668485 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mxf2\" (UniqueName: \"kubernetes.io/projected/0876a61c-8136-4da5-9683-0d0ae61de9b7-kube-api-access-8mxf2\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.669240 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-catalog-content\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.669354 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-utilities\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.689532 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mxf2\" (UniqueName: \"kubernetes.io/projected/0876a61c-8136-4da5-9683-0d0ae61de9b7-kube-api-access-8mxf2\") pod \"certified-operators-6jqjh\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:08 crc kubenswrapper[4678]: I1124 11:30:08.809987 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:09 crc kubenswrapper[4678]: I1124 11:30:09.303013 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6jqjh"] Nov 24 11:30:09 crc kubenswrapper[4678]: I1124 11:30:09.362517 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jqjh" event={"ID":"0876a61c-8136-4da5-9683-0d0ae61de9b7","Type":"ContainerStarted","Data":"2198b02f6ca7b05d2c2d746292c910f33f55e0aabbd96fb7f142012eaf7d5745"} Nov 24 11:30:10 crc kubenswrapper[4678]: I1124 11:30:10.379283 4678 generic.go:334] "Generic (PLEG): container finished" podID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerID="7c6b92790847c1259dd20b37c19a8c595a5c599ecc8f6ee75cbb74c375e1d04c" exitCode=0 Nov 24 11:30:10 crc kubenswrapper[4678]: I1124 11:30:10.379410 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jqjh" event={"ID":"0876a61c-8136-4da5-9683-0d0ae61de9b7","Type":"ContainerDied","Data":"7c6b92790847c1259dd20b37c19a8c595a5c599ecc8f6ee75cbb74c375e1d04c"} Nov 24 11:30:11 crc kubenswrapper[4678]: I1124 11:30:11.394735 4678 generic.go:334] "Generic (PLEG): container finished" podID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerID="8159ec20b58193409693735d9b428e2949748b6908745b1b36e11d9d7b4e21e3" exitCode=0 Nov 24 11:30:11 crc kubenswrapper[4678]: I1124 11:30:11.394807 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jqjh" event={"ID":"0876a61c-8136-4da5-9683-0d0ae61de9b7","Type":"ContainerDied","Data":"8159ec20b58193409693735d9b428e2949748b6908745b1b36e11d9d7b4e21e3"} Nov 24 11:30:12 crc kubenswrapper[4678]: I1124 11:30:12.405072 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jqjh" 
event={"ID":"0876a61c-8136-4da5-9683-0d0ae61de9b7","Type":"ContainerStarted","Data":"b5222b771fa1bc94acf865342c9bd906f6ca66d2b52aab1f612e4610a8cba345"} Nov 24 11:30:12 crc kubenswrapper[4678]: I1124 11:30:12.435560 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6jqjh" podStartSLOduration=3.041431325 podStartE2EDuration="4.435531289s" podCreationTimestamp="2025-11-24 11:30:08 +0000 UTC" firstStartedPulling="2025-11-24 11:30:10.381772437 +0000 UTC m=+821.312832076" lastFinishedPulling="2025-11-24 11:30:11.775872401 +0000 UTC m=+822.706932040" observedRunningTime="2025-11-24 11:30:12.429369115 +0000 UTC m=+823.360428764" watchObservedRunningTime="2025-11-24 11:30:12.435531289 +0000 UTC m=+823.366590938" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.686919 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f4957"] Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.691963 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.697172 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4957"] Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.782315 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-utilities\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.782428 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-catalog-content\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.782543 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcs4z\" (UniqueName: \"kubernetes.io/projected/7a423872-9340-4317-826f-3a2fda4a205c-kube-api-access-bcs4z\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.883811 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-utilities\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.883908 4678 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-catalog-content\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.884009 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcs4z\" (UniqueName: \"kubernetes.io/projected/7a423872-9340-4317-826f-3a2fda4a205c-kube-api-access-bcs4z\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.884890 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-utilities\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.885113 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-catalog-content\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:14 crc kubenswrapper[4678]: I1124 11:30:14.906537 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcs4z\" (UniqueName: \"kubernetes.io/projected/7a423872-9340-4317-826f-3a2fda4a205c-kube-api-access-bcs4z\") pod \"redhat-marketplace-f4957\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:15 crc kubenswrapper[4678]: I1124 11:30:15.012458 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:15 crc kubenswrapper[4678]: I1124 11:30:15.484436 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4957"] Nov 24 11:30:16 crc kubenswrapper[4678]: I1124 11:30:16.438463 4678 generic.go:334] "Generic (PLEG): container finished" podID="7a423872-9340-4317-826f-3a2fda4a205c" containerID="e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8" exitCode=0 Nov 24 11:30:16 crc kubenswrapper[4678]: I1124 11:30:16.438599 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4957" event={"ID":"7a423872-9340-4317-826f-3a2fda4a205c","Type":"ContainerDied","Data":"e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8"} Nov 24 11:30:16 crc kubenswrapper[4678]: I1124 11:30:16.439294 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4957" event={"ID":"7a423872-9340-4317-826f-3a2fda4a205c","Type":"ContainerStarted","Data":"2af3378b40db9c876a3a8ed092bd9c544e885aef8d03761bb457bffc89c11ac9"} Nov 24 11:30:17 crc kubenswrapper[4678]: I1124 11:30:17.453539 4678 generic.go:334] "Generic (PLEG): container finished" podID="7a423872-9340-4317-826f-3a2fda4a205c" containerID="12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d" exitCode=0 Nov 24 11:30:17 crc kubenswrapper[4678]: I1124 11:30:17.453607 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4957" event={"ID":"7a423872-9340-4317-826f-3a2fda4a205c","Type":"ContainerDied","Data":"12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d"} Nov 24 11:30:18 crc kubenswrapper[4678]: I1124 11:30:18.481993 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4957" 
event={"ID":"7a423872-9340-4317-826f-3a2fda4a205c","Type":"ContainerStarted","Data":"c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952"} Nov 24 11:30:18 crc kubenswrapper[4678]: I1124 11:30:18.512183 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f4957" podStartSLOduration=3.095915679 podStartE2EDuration="4.512164905s" podCreationTimestamp="2025-11-24 11:30:14 +0000 UTC" firstStartedPulling="2025-11-24 11:30:16.441696356 +0000 UTC m=+827.372755995" lastFinishedPulling="2025-11-24 11:30:17.857945582 +0000 UTC m=+828.789005221" observedRunningTime="2025-11-24 11:30:18.508434586 +0000 UTC m=+829.439494215" watchObservedRunningTime="2025-11-24 11:30:18.512164905 +0000 UTC m=+829.443224544" Nov 24 11:30:18 crc kubenswrapper[4678]: I1124 11:30:18.810478 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:18 crc kubenswrapper[4678]: I1124 11:30:18.810927 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:18 crc kubenswrapper[4678]: I1124 11:30:18.864320 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:19 crc kubenswrapper[4678]: I1124 11:30:19.536019 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:20 crc kubenswrapper[4678]: I1124 11:30:20.883769 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6jqjh"] Nov 24 11:30:21 crc kubenswrapper[4678]: I1124 11:30:21.508021 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6jqjh" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerName="registry-server" 
containerID="cri-o://b5222b771fa1bc94acf865342c9bd906f6ca66d2b52aab1f612e4610a8cba345" gracePeriod=2 Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.519794 4678 generic.go:334] "Generic (PLEG): container finished" podID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerID="b5222b771fa1bc94acf865342c9bd906f6ca66d2b52aab1f612e4610a8cba345" exitCode=0 Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.519981 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jqjh" event={"ID":"0876a61c-8136-4da5-9683-0d0ae61de9b7","Type":"ContainerDied","Data":"b5222b771fa1bc94acf865342c9bd906f6ca66d2b52aab1f612e4610a8cba345"} Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.621829 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.763821 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-catalog-content\") pod \"0876a61c-8136-4da5-9683-0d0ae61de9b7\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.763966 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mxf2\" (UniqueName: \"kubernetes.io/projected/0876a61c-8136-4da5-9683-0d0ae61de9b7-kube-api-access-8mxf2\") pod \"0876a61c-8136-4da5-9683-0d0ae61de9b7\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.764065 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-utilities\") pod \"0876a61c-8136-4da5-9683-0d0ae61de9b7\" (UID: \"0876a61c-8136-4da5-9683-0d0ae61de9b7\") " Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 
11:30:22.765883 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-utilities" (OuterVolumeSpecName: "utilities") pod "0876a61c-8136-4da5-9683-0d0ae61de9b7" (UID: "0876a61c-8136-4da5-9683-0d0ae61de9b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.774040 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0876a61c-8136-4da5-9683-0d0ae61de9b7-kube-api-access-8mxf2" (OuterVolumeSpecName: "kube-api-access-8mxf2") pod "0876a61c-8136-4da5-9683-0d0ae61de9b7" (UID: "0876a61c-8136-4da5-9683-0d0ae61de9b7"). InnerVolumeSpecName "kube-api-access-8mxf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.826212 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0876a61c-8136-4da5-9683-0d0ae61de9b7" (UID: "0876a61c-8136-4da5-9683-0d0ae61de9b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.866955 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mxf2\" (UniqueName: \"kubernetes.io/projected/0876a61c-8136-4da5-9683-0d0ae61de9b7-kube-api-access-8mxf2\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.867007 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:22 crc kubenswrapper[4678]: I1124 11:30:22.867024 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0876a61c-8136-4da5-9683-0d0ae61de9b7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:23 crc kubenswrapper[4678]: I1124 11:30:23.531009 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jqjh" event={"ID":"0876a61c-8136-4da5-9683-0d0ae61de9b7","Type":"ContainerDied","Data":"2198b02f6ca7b05d2c2d746292c910f33f55e0aabbd96fb7f142012eaf7d5745"} Nov 24 11:30:23 crc kubenswrapper[4678]: I1124 11:30:23.531077 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6jqjh" Nov 24 11:30:23 crc kubenswrapper[4678]: I1124 11:30:23.531091 4678 scope.go:117] "RemoveContainer" containerID="b5222b771fa1bc94acf865342c9bd906f6ca66d2b52aab1f612e4610a8cba345" Nov 24 11:30:23 crc kubenswrapper[4678]: I1124 11:30:23.551744 4678 scope.go:117] "RemoveContainer" containerID="8159ec20b58193409693735d9b428e2949748b6908745b1b36e11d9d7b4e21e3" Nov 24 11:30:23 crc kubenswrapper[4678]: I1124 11:30:23.566822 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6jqjh"] Nov 24 11:30:23 crc kubenswrapper[4678]: I1124 11:30:23.577763 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6jqjh"] Nov 24 11:30:23 crc kubenswrapper[4678]: I1124 11:30:23.585146 4678 scope.go:117] "RemoveContainer" containerID="7c6b92790847c1259dd20b37c19a8c595a5c599ecc8f6ee75cbb74c375e1d04c" Nov 24 11:30:23 crc kubenswrapper[4678]: I1124 11:30:23.912084 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" path="/var/lib/kubelet/pods/0876a61c-8136-4da5-9683-0d0ae61de9b7/volumes" Nov 24 11:30:25 crc kubenswrapper[4678]: I1124 11:30:25.013383 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:25 crc kubenswrapper[4678]: I1124 11:30:25.013950 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:25 crc kubenswrapper[4678]: I1124 11:30:25.072446 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:25 crc kubenswrapper[4678]: I1124 11:30:25.643722 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:27 crc 
kubenswrapper[4678]: I1124 11:30:27.273405 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4957"] Nov 24 11:30:27 crc kubenswrapper[4678]: I1124 11:30:27.596558 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f4957" podUID="7a423872-9340-4317-826f-3a2fda4a205c" containerName="registry-server" containerID="cri-o://c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952" gracePeriod=2 Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.069407 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.176726 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-utilities\") pod \"7a423872-9340-4317-826f-3a2fda4a205c\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.176811 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcs4z\" (UniqueName: \"kubernetes.io/projected/7a423872-9340-4317-826f-3a2fda4a205c-kube-api-access-bcs4z\") pod \"7a423872-9340-4317-826f-3a2fda4a205c\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.176930 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-catalog-content\") pod \"7a423872-9340-4317-826f-3a2fda4a205c\" (UID: \"7a423872-9340-4317-826f-3a2fda4a205c\") " Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.177788 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-utilities" (OuterVolumeSpecName: "utilities") pod "7a423872-9340-4317-826f-3a2fda4a205c" (UID: "7a423872-9340-4317-826f-3a2fda4a205c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.188150 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a423872-9340-4317-826f-3a2fda4a205c-kube-api-access-bcs4z" (OuterVolumeSpecName: "kube-api-access-bcs4z") pod "7a423872-9340-4317-826f-3a2fda4a205c" (UID: "7a423872-9340-4317-826f-3a2fda4a205c"). InnerVolumeSpecName "kube-api-access-bcs4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.195209 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a423872-9340-4317-826f-3a2fda4a205c" (UID: "7a423872-9340-4317-826f-3a2fda4a205c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.279285 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.279327 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcs4z\" (UniqueName: \"kubernetes.io/projected/7a423872-9340-4317-826f-3a2fda4a205c-kube-api-access-bcs4z\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.279342 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a423872-9340-4317-826f-3a2fda4a205c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332220 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm"] Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.332568 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerName="registry-server" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332588 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerName="registry-server" Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.332613 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerName="extract-utilities" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332619 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerName="extract-utilities" Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.332628 4678 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a423872-9340-4317-826f-3a2fda4a205c" containerName="registry-server" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332634 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a423872-9340-4317-826f-3a2fda4a205c" containerName="registry-server" Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.332648 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerName="extract-content" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332654 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerName="extract-content" Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.332664 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a423872-9340-4317-826f-3a2fda4a205c" containerName="extract-content" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332688 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a423872-9340-4317-826f-3a2fda4a205c" containerName="extract-content" Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.332698 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a423872-9340-4317-826f-3a2fda4a205c" containerName="extract-utilities" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332704 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a423872-9340-4317-826f-3a2fda4a205c" containerName="extract-utilities" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332847 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="0876a61c-8136-4da5-9683-0d0ae61de9b7" containerName="registry-server" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.332858 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a423872-9340-4317-826f-3a2fda4a205c" containerName="registry-server" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.334010 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.336468 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.348285 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm"] Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.482597 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.482712 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.482775 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq9mw\" (UniqueName: \"kubernetes.io/projected/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-kube-api-access-jq9mw\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: 
I1124 11:30:28.584193 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.584300 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.584384 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq9mw\" (UniqueName: \"kubernetes.io/projected/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-kube-api-access-jq9mw\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.584835 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.585182 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.601302 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq9mw\" (UniqueName: \"kubernetes.io/projected/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-kube-api-access-jq9mw\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.608880 4678 generic.go:334] "Generic (PLEG): container finished" podID="7a423872-9340-4317-826f-3a2fda4a205c" containerID="c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952" exitCode=0 Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.608942 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4957" event={"ID":"7a423872-9340-4317-826f-3a2fda4a205c","Type":"ContainerDied","Data":"c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952"} Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.608967 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4957" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.608997 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4957" event={"ID":"7a423872-9340-4317-826f-3a2fda4a205c","Type":"ContainerDied","Data":"2af3378b40db9c876a3a8ed092bd9c544e885aef8d03761bb457bffc89c11ac9"} Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.609021 4678 scope.go:117] "RemoveContainer" containerID="c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.641133 4678 scope.go:117] "RemoveContainer" containerID="12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.650723 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4957"] Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.654270 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.663283 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4957"] Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.675114 4678 scope.go:117] "RemoveContainer" containerID="e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.731149 4678 scope.go:117] "RemoveContainer" containerID="c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952" Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.736642 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952\": container with ID starting with c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952 not found: ID does not exist" containerID="c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.736718 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952"} err="failed to get container status \"c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952\": rpc error: code = NotFound desc = could not find container \"c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952\": container with ID starting with c1af8d5d8f7b66bb984ccd0ffab0300f6f776594de3e0466a70f7118d1730952 not found: ID does not exist" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.736772 4678 scope.go:117] "RemoveContainer" containerID="12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d" Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.737576 4678 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d\": container with ID starting with 12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d not found: ID does not exist" containerID="12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.737602 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d"} err="failed to get container status \"12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d\": rpc error: code = NotFound desc = could not find container \"12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d\": container with ID starting with 12e0b6772c210bc992ad42ea1a95c46406040cee4a7b950abe84c21ba1a6760d not found: ID does not exist" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.737616 4678 scope.go:117] "RemoveContainer" containerID="e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8" Nov 24 11:30:28 crc kubenswrapper[4678]: E1124 11:30:28.738931 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8\": container with ID starting with e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8 not found: ID does not exist" containerID="e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8" Nov 24 11:30:28 crc kubenswrapper[4678]: I1124 11:30:28.738999 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8"} err="failed to get container status \"e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8\": rpc error: code = NotFound desc = could not find container 
\"e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8\": container with ID starting with e94282c586c8f7ff562779d9d5e939b76cdbf8fd1a0a53420af2e3060ac746f8 not found: ID does not exist" Nov 24 11:30:29 crc kubenswrapper[4678]: I1124 11:30:29.167190 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm"] Nov 24 11:30:29 crc kubenswrapper[4678]: I1124 11:30:29.619315 4678 generic.go:334] "Generic (PLEG): container finished" podID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerID="c0824134463ab75fa4c9be7aea8a99c813aaa4c935d9a02f8378e5ece5466767" exitCode=0 Nov 24 11:30:29 crc kubenswrapper[4678]: I1124 11:30:29.619366 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" event={"ID":"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88","Type":"ContainerDied","Data":"c0824134463ab75fa4c9be7aea8a99c813aaa4c935d9a02f8378e5ece5466767"} Nov 24 11:30:29 crc kubenswrapper[4678]: I1124 11:30:29.619780 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" event={"ID":"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88","Type":"ContainerStarted","Data":"106f6f17ab7369abc2db975def44c869e7d7d76b36ba429f2c6ae61cd7f9a96b"} Nov 24 11:30:29 crc kubenswrapper[4678]: I1124 11:30:29.907161 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a423872-9340-4317-826f-3a2fda4a205c" path="/var/lib/kubelet/pods/7a423872-9340-4317-826f-3a2fda4a205c/volumes" Nov 24 11:30:31 crc kubenswrapper[4678]: I1124 11:30:31.639079 4678 generic.go:334] "Generic (PLEG): container finished" podID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerID="ccee051142fd27eac5ff6f7e6dd7f4141c3984c9a1ff73087b6ae06d60e95699" exitCode=0 Nov 24 11:30:31 crc kubenswrapper[4678]: I1124 11:30:31.639169 4678 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" event={"ID":"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88","Type":"ContainerDied","Data":"ccee051142fd27eac5ff6f7e6dd7f4141c3984c9a1ff73087b6ae06d60e95699"} Nov 24 11:30:31 crc kubenswrapper[4678]: I1124 11:30:31.885496 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wxs9n"] Nov 24 11:30:31 crc kubenswrapper[4678]: I1124 11:30:31.887034 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:31 crc kubenswrapper[4678]: I1124 11:30:31.905160 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxs9n"] Nov 24 11:30:31 crc kubenswrapper[4678]: I1124 11:30:31.948853 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-catalog-content\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:31 crc kubenswrapper[4678]: I1124 11:30:31.948918 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-utilities\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:31 crc kubenswrapper[4678]: I1124 11:30:31.949368 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9p5\" (UniqueName: \"kubernetes.io/projected/1fe16acd-4f27-4306-b823-4901ab1b2e68-kube-api-access-sz9p5\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " 
pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.050538 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9p5\" (UniqueName: \"kubernetes.io/projected/1fe16acd-4f27-4306-b823-4901ab1b2e68-kube-api-access-sz9p5\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.050958 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-catalog-content\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.051070 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-utilities\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.051586 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-utilities\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.051662 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-catalog-content\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:32 crc 
kubenswrapper[4678]: I1124 11:30:32.078467 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9p5\" (UniqueName: \"kubernetes.io/projected/1fe16acd-4f27-4306-b823-4901ab1b2e68-kube-api-access-sz9p5\") pod \"redhat-operators-wxs9n\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.241929 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.651386 4678 generic.go:334] "Generic (PLEG): container finished" podID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerID="846adf5793b0a86936df137afcb2501e310e5b875544e45a708e93c40c3b96cb" exitCode=0 Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.651495 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" event={"ID":"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88","Type":"ContainerDied","Data":"846adf5793b0a86936df137afcb2501e310e5b875544e45a708e93c40c3b96cb"} Nov 24 11:30:32 crc kubenswrapper[4678]: I1124 11:30:32.768004 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxs9n"] Nov 24 11:30:32 crc kubenswrapper[4678]: W1124 11:30:32.771900 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe16acd_4f27_4306_b823_4901ab1b2e68.slice/crio-b49a30c382ef5047c92fe23b013c97da5779118ab7a04860b8bdaea599e38c91 WatchSource:0}: Error finding container b49a30c382ef5047c92fe23b013c97da5779118ab7a04860b8bdaea599e38c91: Status 404 returned error can't find the container with id b49a30c382ef5047c92fe23b013c97da5779118ab7a04860b8bdaea599e38c91 Nov 24 11:30:33 crc kubenswrapper[4678]: I1124 11:30:33.670975 4678 generic.go:334] "Generic (PLEG): container 
finished" podID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerID="1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421" exitCode=0 Nov 24 11:30:33 crc kubenswrapper[4678]: I1124 11:30:33.671197 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxs9n" event={"ID":"1fe16acd-4f27-4306-b823-4901ab1b2e68","Type":"ContainerDied","Data":"1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421"} Nov 24 11:30:33 crc kubenswrapper[4678]: I1124 11:30:33.672148 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxs9n" event={"ID":"1fe16acd-4f27-4306-b823-4901ab1b2e68","Type":"ContainerStarted","Data":"b49a30c382ef5047c92fe23b013c97da5779118ab7a04860b8bdaea599e38c91"} Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.057703 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.191400 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq9mw\" (UniqueName: \"kubernetes.io/projected/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-kube-api-access-jq9mw\") pod \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.191938 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-bundle\") pod \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.192062 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-util\") pod 
\"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\" (UID: \"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88\") " Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.195209 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-bundle" (OuterVolumeSpecName: "bundle") pod "a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" (UID: "a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.205862 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-kube-api-access-jq9mw" (OuterVolumeSpecName: "kube-api-access-jq9mw") pod "a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" (UID: "a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88"). InnerVolumeSpecName "kube-api-access-jq9mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.220403 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-util" (OuterVolumeSpecName: "util") pod "a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" (UID: "a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.293955 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq9mw\" (UniqueName: \"kubernetes.io/projected/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-kube-api-access-jq9mw\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.294248 4678 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.294306 4678 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.683922 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" event={"ID":"a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88","Type":"ContainerDied","Data":"106f6f17ab7369abc2db975def44c869e7d7d76b36ba429f2c6ae61cd7f9a96b"} Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.685772 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="106f6f17ab7369abc2db975def44c869e7d7d76b36ba429f2c6ae61cd7f9a96b" Nov 24 11:30:34 crc kubenswrapper[4678]: I1124 11:30:34.683993 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm" Nov 24 11:30:35 crc kubenswrapper[4678]: I1124 11:30:35.697185 4678 generic.go:334] "Generic (PLEG): container finished" podID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerID="130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32" exitCode=0 Nov 24 11:30:35 crc kubenswrapper[4678]: I1124 11:30:35.697311 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxs9n" event={"ID":"1fe16acd-4f27-4306-b823-4901ab1b2e68","Type":"ContainerDied","Data":"130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32"} Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.697973 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nwsr8"] Nov 24 11:30:37 crc kubenswrapper[4678]: E1124 11:30:37.698792 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerName="pull" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.698809 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerName="pull" Nov 24 11:30:37 crc kubenswrapper[4678]: E1124 11:30:37.698840 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerName="extract" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.698850 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerName="extract" Nov 24 11:30:37 crc kubenswrapper[4678]: E1124 11:30:37.698859 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerName="util" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.698868 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerName="util" Nov 
24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.699027 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88" containerName="extract" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.699799 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwsr8" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.702430 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.705373 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.707472 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-64zsk" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.717712 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxs9n" event={"ID":"1fe16acd-4f27-4306-b823-4901ab1b2e68","Type":"ContainerStarted","Data":"cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475"} Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.721355 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nwsr8"] Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.787284 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wxs9n" podStartSLOduration=3.351401794 podStartE2EDuration="6.787259259s" podCreationTimestamp="2025-11-24 11:30:31 +0000 UTC" firstStartedPulling="2025-11-24 11:30:33.673787296 +0000 UTC m=+844.604846935" lastFinishedPulling="2025-11-24 11:30:37.109644761 +0000 UTC m=+848.040704400" observedRunningTime="2025-11-24 11:30:37.779856221 +0000 UTC m=+848.710915940" 
watchObservedRunningTime="2025-11-24 11:30:37.787259259 +0000 UTC m=+848.718318898" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.860394 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czfzf\" (UniqueName: \"kubernetes.io/projected/93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106-kube-api-access-czfzf\") pod \"nmstate-operator-557fdffb88-nwsr8\" (UID: \"93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nwsr8" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.962638 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfzf\" (UniqueName: \"kubernetes.io/projected/93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106-kube-api-access-czfzf\") pod \"nmstate-operator-557fdffb88-nwsr8\" (UID: \"93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nwsr8" Nov 24 11:30:37 crc kubenswrapper[4678]: I1124 11:30:37.989951 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfzf\" (UniqueName: \"kubernetes.io/projected/93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106-kube-api-access-czfzf\") pod \"nmstate-operator-557fdffb88-nwsr8\" (UID: \"93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nwsr8" Nov 24 11:30:38 crc kubenswrapper[4678]: I1124 11:30:38.018936 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwsr8" Nov 24 11:30:38 crc kubenswrapper[4678]: I1124 11:30:38.552651 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nwsr8"] Nov 24 11:30:38 crc kubenswrapper[4678]: I1124 11:30:38.727178 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwsr8" event={"ID":"93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106","Type":"ContainerStarted","Data":"74522f60b4a59510f48a2ff1bb09da3409aa91479751387192bdaa82edb11806"} Nov 24 11:30:41 crc kubenswrapper[4678]: I1124 11:30:41.759851 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwsr8" event={"ID":"93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106","Type":"ContainerStarted","Data":"6d8a6f71f8512fd4a201cff1976f386d6c6923c7896237bfe749704d9159d42f"} Nov 24 11:30:41 crc kubenswrapper[4678]: I1124 11:30:41.793924 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwsr8" podStartSLOduration=2.151434244 podStartE2EDuration="4.793900739s" podCreationTimestamp="2025-11-24 11:30:37 +0000 UTC" firstStartedPulling="2025-11-24 11:30:38.571913456 +0000 UTC m=+849.502973095" lastFinishedPulling="2025-11-24 11:30:41.214379951 +0000 UTC m=+852.145439590" observedRunningTime="2025-11-24 11:30:41.788025382 +0000 UTC m=+852.719085041" watchObservedRunningTime="2025-11-24 11:30:41.793900739 +0000 UTC m=+852.724960378" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.242658 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.242785 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.824464 4678 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt"] Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.826627 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.829096 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rjjgq" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.834277 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt"] Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.845064 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbtv\" (UniqueName: \"kubernetes.io/projected/1be5edf4-f534-4d7b-ac82-27c9f7ea1e65-kube-api-access-mnbtv\") pod \"nmstate-metrics-5dcf9c57c5-xd4vt\" (UID: \"1be5edf4-f534-4d7b-ac82-27c9f7ea1e65\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.867691 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bjlbs"] Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.869544 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.894974 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw"] Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.896569 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.899446 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.943788 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw"] Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.949438 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnbtv\" (UniqueName: \"kubernetes.io/projected/1be5edf4-f534-4d7b-ac82-27c9f7ea1e65-kube-api-access-mnbtv\") pod \"nmstate-metrics-5dcf9c57c5-xd4vt\" (UID: \"1be5edf4-f534-4d7b-ac82-27c9f7ea1e65\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.949508 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd424\" (UniqueName: \"kubernetes.io/projected/9d6c6722-a205-4130-8e09-ee82c51491a9-kube-api-access-kd424\") pod \"nmstate-webhook-6b89b748d8-c2tjw\" (UID: \"9d6c6722-a205-4130-8e09-ee82c51491a9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.949535 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d6c6722-a205-4130-8e09-ee82c51491a9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-c2tjw\" (UID: \"9d6c6722-a205-4130-8e09-ee82c51491a9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.949555 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-nmstate-lock\") pod 
\"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.949613 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfrm\" (UniqueName: \"kubernetes.io/projected/40792d21-2a53-4dba-9895-127d9414e802-kube-api-access-2zfrm\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.949636 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-dbus-socket\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.949700 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-ovs-socket\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:42 crc kubenswrapper[4678]: I1124 11:30:42.975513 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnbtv\" (UniqueName: \"kubernetes.io/projected/1be5edf4-f534-4d7b-ac82-27c9f7ea1e65-kube-api-access-mnbtv\") pod \"nmstate-metrics-5dcf9c57c5-xd4vt\" (UID: \"1be5edf4-f534-4d7b-ac82-27c9f7ea1e65\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.031709 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b"] Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.032861 4678 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.041428 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.041656 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-l92hn" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.041729 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.042522 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b"] Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.050868 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d6c6722-a205-4130-8e09-ee82c51491a9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-c2tjw\" (UID: \"9d6c6722-a205-4130-8e09-ee82c51491a9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.050913 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-nmstate-lock\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.050987 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zfrm\" (UniqueName: \"kubernetes.io/projected/40792d21-2a53-4dba-9895-127d9414e802-kube-api-access-2zfrm\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 
24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.051007 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-dbus-socket\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.051054 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-ovs-socket\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.051110 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd424\" (UniqueName: \"kubernetes.io/projected/9d6c6722-a205-4130-8e09-ee82c51491a9-kube-api-access-kd424\") pod \"nmstate-webhook-6b89b748d8-c2tjw\" (UID: \"9d6c6722-a205-4130-8e09-ee82c51491a9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.053882 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-nmstate-lock\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.053935 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-ovs-socket\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.055211 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/40792d21-2a53-4dba-9895-127d9414e802-dbus-socket\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.072000 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9d6c6722-a205-4130-8e09-ee82c51491a9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-c2tjw\" (UID: \"9d6c6722-a205-4130-8e09-ee82c51491a9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.083578 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd424\" (UniqueName: \"kubernetes.io/projected/9d6c6722-a205-4130-8e09-ee82c51491a9-kube-api-access-kd424\") pod \"nmstate-webhook-6b89b748d8-c2tjw\" (UID: \"9d6c6722-a205-4130-8e09-ee82c51491a9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.095827 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zfrm\" (UniqueName: \"kubernetes.io/projected/40792d21-2a53-4dba-9895-127d9414e802-kube-api-access-2zfrm\") pod \"nmstate-handler-bjlbs\" (UID: \"40792d21-2a53-4dba-9895-127d9414e802\") " pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.148716 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.154968 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.155062 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.155112 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf4b2\" (UniqueName: \"kubernetes.io/projected/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-kube-api-access-zf4b2\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.217918 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.242396 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.251380 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b6d66f75b-9j4v9"] Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.252388 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.262986 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-oauth-config\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.263118 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-trusted-ca-bundle\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.263208 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.263370 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-serving-cert\") pod 
\"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.263412 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-config\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.263443 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: E1124 11:30:43.263487 4678 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 24 11:30:43 crc kubenswrapper[4678]: E1124 11:30:43.263558 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-plugin-serving-cert podName:81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9 nodeName:}" failed. No retries permitted until 2025-11-24 11:30:43.763537411 +0000 UTC m=+854.694597040 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-82t9b" (UID: "81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9") : secret "plugin-serving-cert" not found Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.263504 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m27c\" (UniqueName: \"kubernetes.io/projected/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-kube-api-access-8m27c\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.263857 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf4b2\" (UniqueName: \"kubernetes.io/projected/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-kube-api-access-zf4b2\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.263944 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-oauth-serving-cert\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.264009 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-service-ca\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc 
kubenswrapper[4678]: I1124 11:30:43.265169 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.285363 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b6d66f75b-9j4v9"] Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.302014 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf4b2\" (UniqueName: \"kubernetes.io/projected/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-kube-api-access-zf4b2\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: W1124 11:30:43.307870 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40792d21_2a53_4dba_9895_127d9414e802.slice/crio-e384c47679f3e2cceaf30b03bfd93a674ef0a33eefe2b043d2f4bae9bf2a425a WatchSource:0}: Error finding container e384c47679f3e2cceaf30b03bfd93a674ef0a33eefe2b043d2f4bae9bf2a425a: Status 404 returned error can't find the container with id e384c47679f3e2cceaf30b03bfd93a674ef0a33eefe2b043d2f4bae9bf2a425a Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.336273 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wxs9n" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="registry-server" probeResult="failure" output=< Nov 24 11:30:43 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:30:43 crc kubenswrapper[4678]: > Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.368576 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-serving-cert\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.368627 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-config\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.368746 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m27c\" (UniqueName: \"kubernetes.io/projected/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-kube-api-access-8m27c\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.368791 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-oauth-serving-cert\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.368823 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-service-ca\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.368877 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-oauth-config\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.368912 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-trusted-ca-bundle\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.370317 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-service-ca\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.370423 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-config\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.371118 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-oauth-serving-cert\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.373301 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-trusted-ca-bundle\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.373759 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-serving-cert\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.377636 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-oauth-config\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.395991 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m27c\" (UniqueName: \"kubernetes.io/projected/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-kube-api-access-8m27c\") pod \"console-5b6d66f75b-9j4v9\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.589236 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.772258 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt"] Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.784630 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bjlbs" event={"ID":"40792d21-2a53-4dba-9895-127d9414e802","Type":"ContainerStarted","Data":"e384c47679f3e2cceaf30b03bfd93a674ef0a33eefe2b043d2f4bae9bf2a425a"} Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.808322 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.820684 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-82t9b\" (UID: \"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:43 crc kubenswrapper[4678]: I1124 11:30:43.912087 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw"] Nov 24 11:30:44 crc kubenswrapper[4678]: I1124 11:30:44.045142 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" Nov 24 11:30:44 crc kubenswrapper[4678]: I1124 11:30:44.218079 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b6d66f75b-9j4v9"] Nov 24 11:30:44 crc kubenswrapper[4678]: W1124 11:30:44.229865 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd91b5ecf_edd7_4914_b8d0_4dbae32548f6.slice/crio-c39a344b59f1c60ac1b1034f7967548428bd075fb6b7084bf3a73d189fa9e2e1 WatchSource:0}: Error finding container c39a344b59f1c60ac1b1034f7967548428bd075fb6b7084bf3a73d189fa9e2e1: Status 404 returned error can't find the container with id c39a344b59f1c60ac1b1034f7967548428bd075fb6b7084bf3a73d189fa9e2e1 Nov 24 11:30:44 crc kubenswrapper[4678]: I1124 11:30:44.535786 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b"] Nov 24 11:30:44 crc kubenswrapper[4678]: I1124 11:30:44.797056 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" event={"ID":"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9","Type":"ContainerStarted","Data":"57ac83ac07206031ac4951d6bc9fddc8623ae11673701bdfe5f8ca44495a6539"} Nov 24 11:30:44 crc kubenswrapper[4678]: I1124 11:30:44.799314 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" event={"ID":"1be5edf4-f534-4d7b-ac82-27c9f7ea1e65","Type":"ContainerStarted","Data":"08ddd1fb62be56bd4843f7ffb82d783be445afa3fde3067fd1dd896675d82531"} Nov 24 11:30:44 crc kubenswrapper[4678]: I1124 11:30:44.801909 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b6d66f75b-9j4v9" event={"ID":"d91b5ecf-edd7-4914-b8d0-4dbae32548f6","Type":"ContainerStarted","Data":"f3accefc14b1fca3e456d3e93b22c172eacc395613fb3dc30dc00b8b3764a51f"} Nov 24 11:30:44 crc kubenswrapper[4678]: 
I1124 11:30:44.801965 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b6d66f75b-9j4v9" event={"ID":"d91b5ecf-edd7-4914-b8d0-4dbae32548f6","Type":"ContainerStarted","Data":"c39a344b59f1c60ac1b1034f7967548428bd075fb6b7084bf3a73d189fa9e2e1"} Nov 24 11:30:44 crc kubenswrapper[4678]: I1124 11:30:44.804646 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" event={"ID":"9d6c6722-a205-4130-8e09-ee82c51491a9","Type":"ContainerStarted","Data":"e00764b357ec44195c565809bf741130d5171ea18c996c447085373cdc71e471"} Nov 24 11:30:44 crc kubenswrapper[4678]: I1124 11:30:44.833749 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b6d66f75b-9j4v9" podStartSLOduration=1.833726237 podStartE2EDuration="1.833726237s" podCreationTimestamp="2025-11-24 11:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:30:44.826123325 +0000 UTC m=+855.757182984" watchObservedRunningTime="2025-11-24 11:30:44.833726237 +0000 UTC m=+855.764785876" Nov 24 11:30:46 crc kubenswrapper[4678]: I1124 11:30:46.831065 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" event={"ID":"9d6c6722-a205-4130-8e09-ee82c51491a9","Type":"ContainerStarted","Data":"066deaa396cce84f7118e157c2ac730e2a096206555448aff3cddcf97b09deef"} Nov 24 11:30:46 crc kubenswrapper[4678]: I1124 11:30:46.835835 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" event={"ID":"1be5edf4-f534-4d7b-ac82-27c9f7ea1e65","Type":"ContainerStarted","Data":"042bb2ea563c0dd14da14084724c2e745a55a950c3e110b71822adc020ab8900"} Nov 24 11:30:46 crc kubenswrapper[4678]: I1124 11:30:46.837415 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bjlbs" 
event={"ID":"40792d21-2a53-4dba-9895-127d9414e802","Type":"ContainerStarted","Data":"b1b19c435de13e193fabc7d8895e37f8e7a1b49b60e11549814b969d7294e65c"} Nov 24 11:30:46 crc kubenswrapper[4678]: I1124 11:30:46.838815 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:46 crc kubenswrapper[4678]: I1124 11:30:46.859368 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bjlbs" podStartSLOduration=1.865119518 podStartE2EDuration="4.859341408s" podCreationTimestamp="2025-11-24 11:30:42 +0000 UTC" firstStartedPulling="2025-11-24 11:30:43.323812601 +0000 UTC m=+854.254872240" lastFinishedPulling="2025-11-24 11:30:46.318034491 +0000 UTC m=+857.249094130" observedRunningTime="2025-11-24 11:30:46.857949701 +0000 UTC m=+857.789009360" watchObservedRunningTime="2025-11-24 11:30:46.859341408 +0000 UTC m=+857.790401067" Nov 24 11:30:47 crc kubenswrapper[4678]: I1124 11:30:47.871324 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" podStartSLOduration=3.474960925 podStartE2EDuration="5.871296486s" podCreationTimestamp="2025-11-24 11:30:42 +0000 UTC" firstStartedPulling="2025-11-24 11:30:43.921418832 +0000 UTC m=+854.852478471" lastFinishedPulling="2025-11-24 11:30:46.317754363 +0000 UTC m=+857.248814032" observedRunningTime="2025-11-24 11:30:47.865975094 +0000 UTC m=+858.797034733" watchObservedRunningTime="2025-11-24 11:30:47.871296486 +0000 UTC m=+858.802356165" Nov 24 11:30:48 crc kubenswrapper[4678]: I1124 11:30:48.860755 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" event={"ID":"81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9","Type":"ContainerStarted","Data":"7d93570c6e2ed4aefcd5dc82fdeeb4c783efdecda0317208a44207e1bec30412"} Nov 24 11:30:48 crc kubenswrapper[4678]: I1124 11:30:48.861571 4678 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:30:48 crc kubenswrapper[4678]: I1124 11:30:48.889054 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-82t9b" podStartSLOduration=3.475313414 podStartE2EDuration="6.889029238s" podCreationTimestamp="2025-11-24 11:30:42 +0000 UTC" firstStartedPulling="2025-11-24 11:30:44.547170945 +0000 UTC m=+855.478230584" lastFinishedPulling="2025-11-24 11:30:47.960886769 +0000 UTC m=+858.891946408" observedRunningTime="2025-11-24 11:30:48.884350152 +0000 UTC m=+859.815409791" watchObservedRunningTime="2025-11-24 11:30:48.889029238 +0000 UTC m=+859.820088877" Nov 24 11:30:50 crc kubenswrapper[4678]: I1124 11:30:50.884465 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" event={"ID":"1be5edf4-f534-4d7b-ac82-27c9f7ea1e65","Type":"ContainerStarted","Data":"6755eb3428b4356570275e40cc0ca10e49e3a1feba13ce279e5dd51622db9949"} Nov 24 11:30:50 crc kubenswrapper[4678]: I1124 11:30:50.921132 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-xd4vt" podStartSLOduration=3.051412501 podStartE2EDuration="8.92109342s" podCreationTimestamp="2025-11-24 11:30:42 +0000 UTC" firstStartedPulling="2025-11-24 11:30:43.825945622 +0000 UTC m=+854.757005251" lastFinishedPulling="2025-11-24 11:30:49.695626531 +0000 UTC m=+860.626686170" observedRunningTime="2025-11-24 11:30:50.912820689 +0000 UTC m=+861.843880358" watchObservedRunningTime="2025-11-24 11:30:50.92109342 +0000 UTC m=+861.852153079" Nov 24 11:30:52 crc kubenswrapper[4678]: I1124 11:30:52.320876 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:52 crc kubenswrapper[4678]: I1124 11:30:52.376934 4678 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:52 crc kubenswrapper[4678]: I1124 11:30:52.558435 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxs9n"] Nov 24 11:30:53 crc kubenswrapper[4678]: I1124 11:30:53.260163 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bjlbs" Nov 24 11:30:53 crc kubenswrapper[4678]: I1124 11:30:53.591506 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:53 crc kubenswrapper[4678]: I1124 11:30:53.591569 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:53 crc kubenswrapper[4678]: I1124 11:30:53.597854 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:53 crc kubenswrapper[4678]: I1124 11:30:53.910387 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wxs9n" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="registry-server" containerID="cri-o://cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475" gracePeriod=2 Nov 24 11:30:53 crc kubenswrapper[4678]: I1124 11:30:53.915523 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:53.999872 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-9c8475f4f-bf2zx"] Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.346733 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.383425 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-catalog-content\") pod \"1fe16acd-4f27-4306-b823-4901ab1b2e68\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.383801 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz9p5\" (UniqueName: \"kubernetes.io/projected/1fe16acd-4f27-4306-b823-4901ab1b2e68-kube-api-access-sz9p5\") pod \"1fe16acd-4f27-4306-b823-4901ab1b2e68\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.383870 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-utilities\") pod \"1fe16acd-4f27-4306-b823-4901ab1b2e68\" (UID: \"1fe16acd-4f27-4306-b823-4901ab1b2e68\") " Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.386239 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-utilities" (OuterVolumeSpecName: "utilities") pod "1fe16acd-4f27-4306-b823-4901ab1b2e68" (UID: "1fe16acd-4f27-4306-b823-4901ab1b2e68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.394583 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe16acd-4f27-4306-b823-4901ab1b2e68-kube-api-access-sz9p5" (OuterVolumeSpecName: "kube-api-access-sz9p5") pod "1fe16acd-4f27-4306-b823-4901ab1b2e68" (UID: "1fe16acd-4f27-4306-b823-4901ab1b2e68"). InnerVolumeSpecName "kube-api-access-sz9p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.474163 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fe16acd-4f27-4306-b823-4901ab1b2e68" (UID: "1fe16acd-4f27-4306-b823-4901ab1b2e68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.487049 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz9p5\" (UniqueName: \"kubernetes.io/projected/1fe16acd-4f27-4306-b823-4901ab1b2e68-kube-api-access-sz9p5\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.487094 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.487109 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe16acd-4f27-4306-b823-4901ab1b2e68-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.919952 4678 generic.go:334] "Generic (PLEG): container finished" podID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerID="cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475" exitCode=0 Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.920070 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxs9n" event={"ID":"1fe16acd-4f27-4306-b823-4901ab1b2e68","Type":"ContainerDied","Data":"cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475"} Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.920557 4678 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-wxs9n" event={"ID":"1fe16acd-4f27-4306-b823-4901ab1b2e68","Type":"ContainerDied","Data":"b49a30c382ef5047c92fe23b013c97da5779118ab7a04860b8bdaea599e38c91"} Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.920095 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxs9n" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.920601 4678 scope.go:117] "RemoveContainer" containerID="cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.943372 4678 scope.go:117] "RemoveContainer" containerID="130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32" Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.953475 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxs9n"] Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.960991 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wxs9n"] Nov 24 11:30:54 crc kubenswrapper[4678]: I1124 11:30:54.978646 4678 scope.go:117] "RemoveContainer" containerID="1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421" Nov 24 11:30:55 crc kubenswrapper[4678]: I1124 11:30:55.010012 4678 scope.go:117] "RemoveContainer" containerID="cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475" Nov 24 11:30:55 crc kubenswrapper[4678]: E1124 11:30:55.010537 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475\": container with ID starting with cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475 not found: ID does not exist" containerID="cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475" Nov 24 11:30:55 crc kubenswrapper[4678]: I1124 11:30:55.010597 4678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475"} err="failed to get container status \"cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475\": rpc error: code = NotFound desc = could not find container \"cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475\": container with ID starting with cee65e65a9af5123ebea074fa8a0e4de48debfeba90c0453eda02e9799da0475 not found: ID does not exist" Nov 24 11:30:55 crc kubenswrapper[4678]: I1124 11:30:55.010633 4678 scope.go:117] "RemoveContainer" containerID="130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32" Nov 24 11:30:55 crc kubenswrapper[4678]: E1124 11:30:55.011008 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32\": container with ID starting with 130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32 not found: ID does not exist" containerID="130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32" Nov 24 11:30:55 crc kubenswrapper[4678]: I1124 11:30:55.011078 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32"} err="failed to get container status \"130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32\": rpc error: code = NotFound desc = could not find container \"130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32\": container with ID starting with 130edf1d1a67fc69b5de9e01886d2b5f78058340e6425353c90551eb41566e32 not found: ID does not exist" Nov 24 11:30:55 crc kubenswrapper[4678]: I1124 11:30:55.011131 4678 scope.go:117] "RemoveContainer" containerID="1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421" Nov 24 11:30:55 crc kubenswrapper[4678]: E1124 
11:30:55.011557 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421\": container with ID starting with 1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421 not found: ID does not exist" containerID="1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421" Nov 24 11:30:55 crc kubenswrapper[4678]: I1124 11:30:55.011589 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421"} err="failed to get container status \"1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421\": rpc error: code = NotFound desc = could not find container \"1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421\": container with ID starting with 1d366094bceb3ba6f02314305a55768c6995c698ef5e2b5745c8714648a1a421 not found: ID does not exist" Nov 24 11:30:55 crc kubenswrapper[4678]: I1124 11:30:55.907384 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" path="/var/lib/kubelet/pods/1fe16acd-4f27-4306-b823-4901ab1b2e68/volumes" Nov 24 11:31:00 crc kubenswrapper[4678]: I1124 11:31:00.296846 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:31:00 crc kubenswrapper[4678]: I1124 11:31:00.297710 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 11:31:03 crc kubenswrapper[4678]: I1124 11:31:03.253011 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-c2tjw" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.056154 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-9c8475f4f-bf2zx" podUID="5e7b135b-2235-4b47-b8f5-a44f4c91a099" containerName="console" containerID="cri-o://2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004" gracePeriod=15 Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.546358 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-9c8475f4f-bf2zx_5e7b135b-2235-4b47-b8f5-a44f4c91a099/console/0.log" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.546692 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.648346 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-service-ca\") pod \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.648411 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmk6q\" (UniqueName: \"kubernetes.io/projected/5e7b135b-2235-4b47-b8f5-a44f4c91a099-kube-api-access-jmk6q\") pod \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.648488 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-serving-cert\") pod 
\"5e7b135b-2235-4b47-b8f5-a44f4c91a099\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.648554 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-config\") pod \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.648718 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-oauth-config\") pod \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.648760 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-trusted-ca-bundle\") pod \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.648786 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-oauth-serving-cert\") pod \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\" (UID: \"5e7b135b-2235-4b47-b8f5-a44f4c91a099\") " Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.649666 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-service-ca" (OuterVolumeSpecName: "service-ca") pod "5e7b135b-2235-4b47-b8f5-a44f4c91a099" (UID: "5e7b135b-2235-4b47-b8f5-a44f4c91a099"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.649830 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5e7b135b-2235-4b47-b8f5-a44f4c91a099" (UID: "5e7b135b-2235-4b47-b8f5-a44f4c91a099"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.649865 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-config" (OuterVolumeSpecName: "console-config") pod "5e7b135b-2235-4b47-b8f5-a44f4c91a099" (UID: "5e7b135b-2235-4b47-b8f5-a44f4c91a099"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.650095 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5e7b135b-2235-4b47-b8f5-a44f4c91a099" (UID: "5e7b135b-2235-4b47-b8f5-a44f4c91a099"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.655272 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5e7b135b-2235-4b47-b8f5-a44f4c91a099" (UID: "5e7b135b-2235-4b47-b8f5-a44f4c91a099"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.656130 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5e7b135b-2235-4b47-b8f5-a44f4c91a099" (UID: "5e7b135b-2235-4b47-b8f5-a44f4c91a099"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.687311 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7b135b-2235-4b47-b8f5-a44f4c91a099-kube-api-access-jmk6q" (OuterVolumeSpecName: "kube-api-access-jmk6q") pod "5e7b135b-2235-4b47-b8f5-a44f4c91a099" (UID: "5e7b135b-2235-4b47-b8f5-a44f4c91a099"). InnerVolumeSpecName "kube-api-access-jmk6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.751159 4678 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.751194 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.751231 4678 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.751246 4678 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.751255 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmk6q\" (UniqueName: \"kubernetes.io/projected/5e7b135b-2235-4b47-b8f5-a44f4c91a099-kube-api-access-jmk6q\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.751267 4678 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:19 crc kubenswrapper[4678]: I1124 11:31:19.751275 4678 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e7b135b-2235-4b47-b8f5-a44f4c91a099-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.160651 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-9c8475f4f-bf2zx_5e7b135b-2235-4b47-b8f5-a44f4c91a099/console/0.log" Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.160737 4678 generic.go:334] "Generic (PLEG): container finished" podID="5e7b135b-2235-4b47-b8f5-a44f4c91a099" containerID="2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004" exitCode=2 Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.160779 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9c8475f4f-bf2zx" event={"ID":"5e7b135b-2235-4b47-b8f5-a44f4c91a099","Type":"ContainerDied","Data":"2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004"} Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.160824 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9c8475f4f-bf2zx" 
event={"ID":"5e7b135b-2235-4b47-b8f5-a44f4c91a099","Type":"ContainerDied","Data":"a45d68f1c245ae6870cfaa00309116cd9c0d92157cf2f31b786e0265331fb1d9"} Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.160851 4678 scope.go:117] "RemoveContainer" containerID="2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004" Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.160848 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9c8475f4f-bf2zx" Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.188940 4678 scope.go:117] "RemoveContainer" containerID="2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004" Nov 24 11:31:20 crc kubenswrapper[4678]: E1124 11:31:20.189984 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004\": container with ID starting with 2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004 not found: ID does not exist" containerID="2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004" Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.190102 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004"} err="failed to get container status \"2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004\": rpc error: code = NotFound desc = could not find container \"2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004\": container with ID starting with 2e65b299ccf4fa933d27102e319530e928265a8c5af93839dcad365757453004 not found: ID does not exist" Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.192692 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-9c8475f4f-bf2zx"] Nov 24 11:31:20 crc kubenswrapper[4678]: I1124 11:31:20.201031 4678 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-9c8475f4f-bf2zx"] Nov 24 11:31:21 crc kubenswrapper[4678]: I1124 11:31:21.906757 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e7b135b-2235-4b47-b8f5-a44f4c91a099" path="/var/lib/kubelet/pods/5e7b135b-2235-4b47-b8f5-a44f4c91a099/volumes" Nov 24 11:31:30 crc kubenswrapper[4678]: I1124 11:31:30.297143 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:31:30 crc kubenswrapper[4678]: I1124 11:31:30.297935 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.715122 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv"] Nov 24 11:31:33 crc kubenswrapper[4678]: E1124 11:31:33.716046 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="registry-server" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.716058 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="registry-server" Nov 24 11:31:33 crc kubenswrapper[4678]: E1124 11:31:33.716071 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7b135b-2235-4b47-b8f5-a44f4c91a099" containerName="console" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.716077 4678 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5e7b135b-2235-4b47-b8f5-a44f4c91a099" containerName="console" Nov 24 11:31:33 crc kubenswrapper[4678]: E1124 11:31:33.716098 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="extract-utilities" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.716106 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="extract-utilities" Nov 24 11:31:33 crc kubenswrapper[4678]: E1124 11:31:33.716119 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="extract-content" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.716125 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="extract-content" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.716268 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7b135b-2235-4b47-b8f5-a44f4c91a099" containerName="console" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.716285 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe16acd-4f27-4306-b823-4901ab1b2e68" containerName="registry-server" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.717276 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.723676 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.737830 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv"] Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.832026 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.832650 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.832830 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln89c\" (UniqueName: \"kubernetes.io/projected/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-kube-api-access-ln89c\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: 
I1124 11:31:33.934704 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.934767 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln89c\" (UniqueName: \"kubernetes.io/projected/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-kube-api-access-ln89c\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.934822 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.935398 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.935518 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:33 crc kubenswrapper[4678]: I1124 11:31:33.959149 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln89c\" (UniqueName: \"kubernetes.io/projected/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-kube-api-access-ln89c\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:34 crc kubenswrapper[4678]: I1124 11:31:34.055021 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:34 crc kubenswrapper[4678]: I1124 11:31:34.602119 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv"] Nov 24 11:31:35 crc kubenswrapper[4678]: I1124 11:31:35.315242 4678 generic.go:334] "Generic (PLEG): container finished" podID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerID="66c85156b1ffe8da3f1f06da004195d02b5753da45794e4101beb26df7f5964c" exitCode=0 Nov 24 11:31:35 crc kubenswrapper[4678]: I1124 11:31:35.315376 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" event={"ID":"9988e41f-4dd1-473b-b0cd-4c7456b08c8d","Type":"ContainerDied","Data":"66c85156b1ffe8da3f1f06da004195d02b5753da45794e4101beb26df7f5964c"} Nov 24 11:31:35 crc kubenswrapper[4678]: I1124 11:31:35.318642 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" event={"ID":"9988e41f-4dd1-473b-b0cd-4c7456b08c8d","Type":"ContainerStarted","Data":"ade1b47a94252268e4ef652d6bec592ba93ffca3aa9c786f437a6504060d9900"} Nov 24 11:31:35 crc kubenswrapper[4678]: I1124 11:31:35.317111 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:31:37 crc kubenswrapper[4678]: I1124 11:31:37.339160 4678 generic.go:334] "Generic (PLEG): container finished" podID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerID="95e3ab2247f26215d253823b3b7776f10ad7c57f198c7fc9f6365ecf7ae15d70" exitCode=0 Nov 24 11:31:37 crc kubenswrapper[4678]: I1124 11:31:37.339977 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" event={"ID":"9988e41f-4dd1-473b-b0cd-4c7456b08c8d","Type":"ContainerDied","Data":"95e3ab2247f26215d253823b3b7776f10ad7c57f198c7fc9f6365ecf7ae15d70"} Nov 24 11:31:38 crc kubenswrapper[4678]: I1124 11:31:38.352389 4678 generic.go:334] "Generic (PLEG): container finished" podID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerID="5dd6d40160bae9f4d7f3d18c07a4063543245364709ec25e5eb8543c97cbcc3e" exitCode=0 Nov 24 11:31:38 crc kubenswrapper[4678]: I1124 11:31:38.353243 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" event={"ID":"9988e41f-4dd1-473b-b0cd-4c7456b08c8d","Type":"ContainerDied","Data":"5dd6d40160bae9f4d7f3d18c07a4063543245364709ec25e5eb8543c97cbcc3e"} Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.715153 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.852389 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln89c\" (UniqueName: \"kubernetes.io/projected/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-kube-api-access-ln89c\") pod \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.852469 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-bundle\") pod \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.852574 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-util\") pod \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\" (UID: \"9988e41f-4dd1-473b-b0cd-4c7456b08c8d\") " Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.853846 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-bundle" (OuterVolumeSpecName: "bundle") pod "9988e41f-4dd1-473b-b0cd-4c7456b08c8d" (UID: "9988e41f-4dd1-473b-b0cd-4c7456b08c8d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.858539 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-kube-api-access-ln89c" (OuterVolumeSpecName: "kube-api-access-ln89c") pod "9988e41f-4dd1-473b-b0cd-4c7456b08c8d" (UID: "9988e41f-4dd1-473b-b0cd-4c7456b08c8d"). InnerVolumeSpecName "kube-api-access-ln89c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.867164 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-util" (OuterVolumeSpecName: "util") pod "9988e41f-4dd1-473b-b0cd-4c7456b08c8d" (UID: "9988e41f-4dd1-473b-b0cd-4c7456b08c8d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.954952 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln89c\" (UniqueName: \"kubernetes.io/projected/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-kube-api-access-ln89c\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.954998 4678 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:39 crc kubenswrapper[4678]: I1124 11:31:39.955009 4678 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9988e41f-4dd1-473b-b0cd-4c7456b08c8d-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:31:40 crc kubenswrapper[4678]: I1124 11:31:40.375046 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" event={"ID":"9988e41f-4dd1-473b-b0cd-4c7456b08c8d","Type":"ContainerDied","Data":"ade1b47a94252268e4ef652d6bec592ba93ffca3aa9c786f437a6504060d9900"} Nov 24 11:31:40 crc kubenswrapper[4678]: I1124 11:31:40.375091 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade1b47a94252268e4ef652d6bec592ba93ffca3aa9c786f437a6504060d9900" Nov 24 11:31:40 crc kubenswrapper[4678]: I1124 11:31:40.375163 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.580891 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62"] Nov 24 11:31:52 crc kubenswrapper[4678]: E1124 11:31:52.584616 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerName="pull" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.584637 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerName="pull" Nov 24 11:31:52 crc kubenswrapper[4678]: E1124 11:31:52.584652 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerName="extract" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.584661 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerName="extract" Nov 24 11:31:52 crc kubenswrapper[4678]: E1124 11:31:52.584710 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerName="util" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.584718 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerName="util" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.584857 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="9988e41f-4dd1-473b-b0cd-4c7456b08c8d" containerName="extract" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.585802 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.588684 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.592708 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.592815 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.592991 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.595917 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59194b72-d4c7-47a0-8cb2-b61ea454172c-apiservice-cert\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.596189 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gghs\" (UniqueName: \"kubernetes.io/projected/59194b72-d4c7-47a0-8cb2-b61ea454172c-kube-api-access-8gghs\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.596365 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/59194b72-d4c7-47a0-8cb2-b61ea454172c-webhook-cert\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.604478 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4brxn" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.652349 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62"] Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.698660 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59194b72-d4c7-47a0-8cb2-b61ea454172c-webhook-cert\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.698838 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59194b72-d4c7-47a0-8cb2-b61ea454172c-apiservice-cert\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.698882 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gghs\" (UniqueName: \"kubernetes.io/projected/59194b72-d4c7-47a0-8cb2-b61ea454172c-kube-api-access-8gghs\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc 
kubenswrapper[4678]: I1124 11:31:52.707582 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59194b72-d4c7-47a0-8cb2-b61ea454172c-apiservice-cert\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.718227 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gghs\" (UniqueName: \"kubernetes.io/projected/59194b72-d4c7-47a0-8cb2-b61ea454172c-kube-api-access-8gghs\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.724343 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59194b72-d4c7-47a0-8cb2-b61ea454172c-webhook-cert\") pod \"metallb-operator-controller-manager-57c67cd666-lmh62\" (UID: \"59194b72-d4c7-47a0-8cb2-b61ea454172c\") " pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.833863 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9"] Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.834877 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.839463 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.839751 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-c7qb9" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.841442 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.880192 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9"] Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.902630 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21c5aca7-95a6-4f08-96b8-4beca12e41cf-webhook-cert\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.903223 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21c5aca7-95a6-4f08-96b8-4beca12e41cf-apiservice-cert\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.903306 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nzvp\" (UniqueName: 
\"kubernetes.io/projected/21c5aca7-95a6-4f08-96b8-4beca12e41cf-kube-api-access-2nzvp\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:52 crc kubenswrapper[4678]: I1124 11:31:52.912225 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.004166 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21c5aca7-95a6-4f08-96b8-4beca12e41cf-apiservice-cert\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.004235 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nzvp\" (UniqueName: \"kubernetes.io/projected/21c5aca7-95a6-4f08-96b8-4beca12e41cf-kube-api-access-2nzvp\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.004290 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21c5aca7-95a6-4f08-96b8-4beca12e41cf-webhook-cert\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.008741 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/21c5aca7-95a6-4f08-96b8-4beca12e41cf-webhook-cert\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.009524 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21c5aca7-95a6-4f08-96b8-4beca12e41cf-apiservice-cert\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.031940 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nzvp\" (UniqueName: \"kubernetes.io/projected/21c5aca7-95a6-4f08-96b8-4beca12e41cf-kube-api-access-2nzvp\") pod \"metallb-operator-webhook-server-5bccc67d9d-ndzx9\" (UID: \"21c5aca7-95a6-4f08-96b8-4beca12e41cf\") " pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.168633 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.404523 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62"] Nov 24 11:31:53 crc kubenswrapper[4678]: W1124 11:31:53.501622 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c5aca7_95a6_4f08_96b8_4beca12e41cf.slice/crio-9eadb9c5de6f10571f231ff9937c0ccb0a52f51b392f87a1488a1855b0bdd7b7 WatchSource:0}: Error finding container 9eadb9c5de6f10571f231ff9937c0ccb0a52f51b392f87a1488a1855b0bdd7b7: Status 404 returned error can't find the container with id 9eadb9c5de6f10571f231ff9937c0ccb0a52f51b392f87a1488a1855b0bdd7b7 Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.503814 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9"] Nov 24 11:31:53 crc kubenswrapper[4678]: I1124 11:31:53.510544 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" event={"ID":"59194b72-d4c7-47a0-8cb2-b61ea454172c","Type":"ContainerStarted","Data":"d6cb99cc34d8ba55533124408805292b80b2df2e5a91110f17fd3a4099cfa0b8"} Nov 24 11:31:54 crc kubenswrapper[4678]: I1124 11:31:54.518604 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" event={"ID":"21c5aca7-95a6-4f08-96b8-4beca12e41cf","Type":"ContainerStarted","Data":"9eadb9c5de6f10571f231ff9937c0ccb0a52f51b392f87a1488a1855b0bdd7b7"} Nov 24 11:31:57 crc kubenswrapper[4678]: I1124 11:31:57.577344 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" 
event={"ID":"59194b72-d4c7-47a0-8cb2-b61ea454172c","Type":"ContainerStarted","Data":"b5fe4639b46a037924769aef237e4d4249e7316e9fbcee0413b547033d41465a"} Nov 24 11:31:57 crc kubenswrapper[4678]: I1124 11:31:57.578099 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:31:57 crc kubenswrapper[4678]: I1124 11:31:57.606756 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" podStartSLOduration=2.149714835 podStartE2EDuration="5.606722196s" podCreationTimestamp="2025-11-24 11:31:52 +0000 UTC" firstStartedPulling="2025-11-24 11:31:53.419393169 +0000 UTC m=+924.350452808" lastFinishedPulling="2025-11-24 11:31:56.87640054 +0000 UTC m=+927.807460169" observedRunningTime="2025-11-24 11:31:57.604147577 +0000 UTC m=+928.535207226" watchObservedRunningTime="2025-11-24 11:31:57.606722196 +0000 UTC m=+928.537781875" Nov 24 11:31:59 crc kubenswrapper[4678]: I1124 11:31:59.604002 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" event={"ID":"21c5aca7-95a6-4f08-96b8-4beca12e41cf","Type":"ContainerStarted","Data":"e375d53147c3b8da07ea39e4c8d15833d6fcfb760bac1441ee3dcdcb51e05f60"} Nov 24 11:31:59 crc kubenswrapper[4678]: I1124 11:31:59.604758 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:31:59 crc kubenswrapper[4678]: I1124 11:31:59.627125 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" podStartSLOduration=2.095803645 podStartE2EDuration="7.627104187s" podCreationTimestamp="2025-11-24 11:31:52 +0000 UTC" firstStartedPulling="2025-11-24 11:31:53.505095488 +0000 UTC m=+924.436155127" lastFinishedPulling="2025-11-24 
11:31:59.03639603 +0000 UTC m=+929.967455669" observedRunningTime="2025-11-24 11:31:59.621564549 +0000 UTC m=+930.552624218" watchObservedRunningTime="2025-11-24 11:31:59.627104187 +0000 UTC m=+930.558163826" Nov 24 11:32:00 crc kubenswrapper[4678]: I1124 11:32:00.297425 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:32:00 crc kubenswrapper[4678]: I1124 11:32:00.297486 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:32:00 crc kubenswrapper[4678]: I1124 11:32:00.297533 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:32:00 crc kubenswrapper[4678]: I1124 11:32:00.298275 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1197580eb03eaddc7b9dc08dbab8ba6891f416c80d33f4fc3fc03e3113ad80b4"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:32:00 crc kubenswrapper[4678]: I1124 11:32:00.298344 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://1197580eb03eaddc7b9dc08dbab8ba6891f416c80d33f4fc3fc03e3113ad80b4" gracePeriod=600 Nov 24 
11:32:00 crc kubenswrapper[4678]: I1124 11:32:00.615727 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="1197580eb03eaddc7b9dc08dbab8ba6891f416c80d33f4fc3fc03e3113ad80b4" exitCode=0 Nov 24 11:32:00 crc kubenswrapper[4678]: I1124 11:32:00.615808 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"1197580eb03eaddc7b9dc08dbab8ba6891f416c80d33f4fc3fc03e3113ad80b4"} Nov 24 11:32:00 crc kubenswrapper[4678]: I1124 11:32:00.616203 4678 scope.go:117] "RemoveContainer" containerID="2bfe74ad72b1070a6c7e462d710c234790fcd2a6fff50a06b17d2f1671decd08" Nov 24 11:32:01 crc kubenswrapper[4678]: I1124 11:32:01.631238 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"ae5ad808ee433867f6ed22b16c3cabcd9999e49e8fb7ad6c2494c4e5839c237e"} Nov 24 11:32:13 crc kubenswrapper[4678]: I1124 11:32:13.302207 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5bccc67d9d-ndzx9" Nov 24 11:32:32 crc kubenswrapper[4678]: I1124 11:32:32.915176 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-57c67cd666-lmh62" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.909442 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mmwxw"] Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.915305 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.917994 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.918389 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cjfpq" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.918518 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.923276 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-266dw"] Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.925108 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.927182 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.956286 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-266dw"] Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.971561 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-frr-conf\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.971618 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-reloader\") pod \"frr-k8s-mmwxw\" (UID: 
\"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.971767 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/205823f2-053a-4c0b-9e24-debc45170c30-cert\") pod \"frr-k8s-webhook-server-6998585d5-266dw\" (UID: \"205823f2-053a-4c0b-9e24-debc45170c30\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.971833 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/763766a9-0307-4ba2-8545-26a817b1f410-metrics-certs\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.971873 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxlgx\" (UniqueName: \"kubernetes.io/projected/763766a9-0307-4ba2-8545-26a817b1f410-kube-api-access-kxlgx\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.971912 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-metrics\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.971944 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/763766a9-0307-4ba2-8545-26a817b1f410-frr-startup\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " 
pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.972013 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-frr-sockets\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:33 crc kubenswrapper[4678]: I1124 11:32:33.972086 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbkmk\" (UniqueName: \"kubernetes.io/projected/205823f2-053a-4c0b-9e24-debc45170c30-kube-api-access-cbkmk\") pod \"frr-k8s-webhook-server-6998585d5-266dw\" (UID: \"205823f2-053a-4c0b-9e24-debc45170c30\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.031084 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-p9g7l"] Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.032366 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.036408 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.038102 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.038867 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-w2mfn" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.038929 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.058793 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-89x9s"] Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.060725 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.062722 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074452 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-cert\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074570 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/205823f2-053a-4c0b-9e24-debc45170c30-cert\") pod \"frr-k8s-webhook-server-6998585d5-266dw\" (UID: \"205823f2-053a-4c0b-9e24-debc45170c30\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074595 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmftv\" (UniqueName: \"kubernetes.io/projected/d6d53fc3-a79e-4249-86ab-e7588111b6ba-kube-api-access-xmftv\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074620 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6vp\" (UniqueName: \"kubernetes.io/projected/9737f178-41ad-4deb-9d13-4245d6a31868-kube-api-access-4l6vp\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074644 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/763766a9-0307-4ba2-8545-26a817b1f410-metrics-certs\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074683 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxlgx\" (UniqueName: \"kubernetes.io/projected/763766a9-0307-4ba2-8545-26a817b1f410-kube-api-access-kxlgx\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074710 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9737f178-41ad-4deb-9d13-4245d6a31868-metallb-excludel2\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074741 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-metrics\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074767 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/763766a9-0307-4ba2-8545-26a817b1f410-frr-startup\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.074805 4678 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.074911 4678 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/205823f2-053a-4c0b-9e24-debc45170c30-cert podName:205823f2-053a-4c0b-9e24-debc45170c30 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:34.574891657 +0000 UTC m=+965.505951296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/205823f2-053a-4c0b-9e24-debc45170c30-cert") pod "frr-k8s-webhook-server-6998585d5-266dw" (UID: "205823f2-053a-4c0b-9e24-debc45170c30") : secret "frr-k8s-webhook-server-cert" not found Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.074814 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-frr-sockets\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.075121 4678 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.075172 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-metrics-certs\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.075206 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/763766a9-0307-4ba2-8545-26a817b1f410-metrics-certs podName:763766a9-0307-4ba2-8545-26a817b1f410 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:34.575181834 +0000 UTC m=+965.506241473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/763766a9-0307-4ba2-8545-26a817b1f410-metrics-certs") pod "frr-k8s-mmwxw" (UID: "763766a9-0307-4ba2-8545-26a817b1f410") : secret "frr-k8s-certs-secret" not found Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.075289 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-frr-sockets\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.075324 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.075338 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-metrics\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.075436 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbkmk\" (UniqueName: \"kubernetes.io/projected/205823f2-053a-4c0b-9e24-debc45170c30-kube-api-access-cbkmk\") pod \"frr-k8s-webhook-server-6998585d5-266dw\" (UID: \"205823f2-053a-4c0b-9e24-debc45170c30\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.075529 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-metrics-certs\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.075574 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-frr-conf\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.075599 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-reloader\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.076155 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-reloader\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.076278 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/763766a9-0307-4ba2-8545-26a817b1f410-frr-conf\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.077172 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/763766a9-0307-4ba2-8545-26a817b1f410-frr-startup\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: 
I1124 11:32:34.093225 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-89x9s"] Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.108111 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxlgx\" (UniqueName: \"kubernetes.io/projected/763766a9-0307-4ba2-8545-26a817b1f410-kube-api-access-kxlgx\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.112183 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbkmk\" (UniqueName: \"kubernetes.io/projected/205823f2-053a-4c0b-9e24-debc45170c30-kube-api-access-cbkmk\") pod \"frr-k8s-webhook-server-6998585d5-266dw\" (UID: \"205823f2-053a-4c0b-9e24-debc45170c30\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.176488 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-metrics-certs\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.176936 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-cert\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.177047 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmftv\" (UniqueName: \"kubernetes.io/projected/d6d53fc3-a79e-4249-86ab-e7588111b6ba-kube-api-access-xmftv\") pod \"controller-6c7b4b5f48-89x9s\" (UID: 
\"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.177128 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l6vp\" (UniqueName: \"kubernetes.io/projected/9737f178-41ad-4deb-9d13-4245d6a31868-kube-api-access-4l6vp\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.177240 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9737f178-41ad-4deb-9d13-4245d6a31868-metallb-excludel2\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.177367 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-metrics-certs\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.177461 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.176731 4678 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.177831 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-metrics-certs podName:d6d53fc3-a79e-4249-86ab-e7588111b6ba nodeName:}" 
failed. No retries permitted until 2025-11-24 11:32:34.677810705 +0000 UTC m=+965.608870344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-metrics-certs") pod "controller-6c7b4b5f48-89x9s" (UID: "d6d53fc3-a79e-4249-86ab-e7588111b6ba") : secret "controller-certs-secret" not found Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.177462 4678 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.177942 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-metrics-certs podName:9737f178-41ad-4deb-9d13-4245d6a31868 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:34.677917348 +0000 UTC m=+965.608976987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-metrics-certs") pod "speaker-p9g7l" (UID: "9737f178-41ad-4deb-9d13-4245d6a31868") : secret "speaker-certs-secret" not found Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.177508 4678 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.178012 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist podName:9737f178-41ad-4deb-9d13-4245d6a31868 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:34.67800477 +0000 UTC m=+965.609064409 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist") pod "speaker-p9g7l" (UID: "9737f178-41ad-4deb-9d13-4245d6a31868") : secret "metallb-memberlist" not found Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.178286 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9737f178-41ad-4deb-9d13-4245d6a31868-metallb-excludel2\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.184047 4678 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.192367 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-cert\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.207712 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmftv\" (UniqueName: \"kubernetes.io/projected/d6d53fc3-a79e-4249-86ab-e7588111b6ba-kube-api-access-xmftv\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.208695 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l6vp\" (UniqueName: \"kubernetes.io/projected/9737f178-41ad-4deb-9d13-4245d6a31868-kube-api-access-4l6vp\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.589196 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/205823f2-053a-4c0b-9e24-debc45170c30-cert\") pod \"frr-k8s-webhook-server-6998585d5-266dw\" (UID: \"205823f2-053a-4c0b-9e24-debc45170c30\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.589276 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/763766a9-0307-4ba2-8545-26a817b1f410-metrics-certs\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.594924 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/763766a9-0307-4ba2-8545-26a817b1f410-metrics-certs\") pod \"frr-k8s-mmwxw\" (UID: \"763766a9-0307-4ba2-8545-26a817b1f410\") " pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.595331 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/205823f2-053a-4c0b-9e24-debc45170c30-cert\") pod \"frr-k8s-webhook-server-6998585d5-266dw\" (UID: \"205823f2-053a-4c0b-9e24-debc45170c30\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.691344 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-metrics-certs\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.691534 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-metrics-certs\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.691581 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.691778 4678 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 11:32:34 crc kubenswrapper[4678]: E1124 11:32:34.691834 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist podName:9737f178-41ad-4deb-9d13-4245d6a31868 nodeName:}" failed. No retries permitted until 2025-11-24 11:32:35.691815283 +0000 UTC m=+966.622874932 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist") pod "speaker-p9g7l" (UID: "9737f178-41ad-4deb-9d13-4245d6a31868") : secret "metallb-memberlist" not found Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.697886 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-metrics-certs\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.698849 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d53fc3-a79e-4249-86ab-e7588111b6ba-metrics-certs\") pod \"controller-6c7b4b5f48-89x9s\" (UID: \"d6d53fc3-a79e-4249-86ab-e7588111b6ba\") " pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.851086 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.869310 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:34 crc kubenswrapper[4678]: I1124 11:32:34.983164 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.188583 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-266dw"] Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.322251 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-89x9s"] Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.727653 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.736485 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9737f178-41ad-4deb-9d13-4245d6a31868-memberlist\") pod \"speaker-p9g7l\" (UID: \"9737f178-41ad-4deb-9d13-4245d6a31868\") " pod="metallb-system/speaker-p9g7l" Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.848392 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-p9g7l" Nov 24 11:32:35 crc kubenswrapper[4678]: W1124 11:32:35.890076 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9737f178_41ad_4deb_9d13_4245d6a31868.slice/crio-282d3070472142c13735072403ae46c913b2330e81e3568e7512c17131319a87 WatchSource:0}: Error finding container 282d3070472142c13735072403ae46c913b2330e81e3568e7512c17131319a87: Status 404 returned error can't find the container with id 282d3070472142c13735072403ae46c913b2330e81e3568e7512c17131319a87 Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.944315 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-89x9s" event={"ID":"d6d53fc3-a79e-4249-86ab-e7588111b6ba","Type":"ContainerStarted","Data":"ec5851ba5e0a3ab7a4194d540f810aa21b11734da76f553817144ad9d47fe4fa"} Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.944368 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-89x9s" event={"ID":"d6d53fc3-a79e-4249-86ab-e7588111b6ba","Type":"ContainerStarted","Data":"799d50a79c00f17db1b108f95f504d2c9be5520e2b458dcd2c9bd8f4be66b660"} Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.944379 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-89x9s" event={"ID":"d6d53fc3-a79e-4249-86ab-e7588111b6ba","Type":"ContainerStarted","Data":"e1f78add154226908c920f9de6466443cf4554a266436e6167bed8348a238287"} Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.944410 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.950682 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p9g7l" 
event={"ID":"9737f178-41ad-4deb-9d13-4245d6a31868","Type":"ContainerStarted","Data":"282d3070472142c13735072403ae46c913b2330e81e3568e7512c17131319a87"} Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.951413 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerStarted","Data":"77eb986038b9113bb3e0f902c8d40ad9e800ff73ee8deefdec43d0d13a3f7c77"} Nov 24 11:32:35 crc kubenswrapper[4678]: I1124 11:32:35.952063 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" event={"ID":"205823f2-053a-4c0b-9e24-debc45170c30","Type":"ContainerStarted","Data":"8c6839d97920575e89af61292fa676cc5e800444dc51683476ed0d5f89ce3e49"} Nov 24 11:32:36 crc kubenswrapper[4678]: I1124 11:32:36.978126 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p9g7l" event={"ID":"9737f178-41ad-4deb-9d13-4245d6a31868","Type":"ContainerStarted","Data":"65f61cbc4cdb9ef936f391ebd7d27daa42ba74fee90026606eeab3e09a50e5f1"} Nov 24 11:32:36 crc kubenswrapper[4678]: I1124 11:32:36.978555 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p9g7l" event={"ID":"9737f178-41ad-4deb-9d13-4245d6a31868","Type":"ContainerStarted","Data":"f7672cefe893ada02f0aeff26abf6ec454c383f4eccc2f3d8251d6d314171161"} Nov 24 11:32:36 crc kubenswrapper[4678]: I1124 11:32:36.978572 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-p9g7l" Nov 24 11:32:37 crc kubenswrapper[4678]: I1124 11:32:37.012997 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-p9g7l" podStartSLOduration=3.012965727 podStartE2EDuration="3.012965727s" podCreationTimestamp="2025-11-24 11:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
11:32:37.004503421 +0000 UTC m=+967.935563060" watchObservedRunningTime="2025-11-24 11:32:37.012965727 +0000 UTC m=+967.944025366" Nov 24 11:32:37 crc kubenswrapper[4678]: I1124 11:32:37.013923 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-89x9s" podStartSLOduration=3.013915463 podStartE2EDuration="3.013915463s" podCreationTimestamp="2025-11-24 11:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:32:35.973931656 +0000 UTC m=+966.904991295" watchObservedRunningTime="2025-11-24 11:32:37.013915463 +0000 UTC m=+967.944975102" Nov 24 11:32:43 crc kubenswrapper[4678]: I1124 11:32:43.034692 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" event={"ID":"205823f2-053a-4c0b-9e24-debc45170c30","Type":"ContainerStarted","Data":"53be3a260b78253fd4494b43dacb6537adbe9eb79edd47c10a6b1fd3c18fde86"} Nov 24 11:32:43 crc kubenswrapper[4678]: I1124 11:32:43.035348 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:43 crc kubenswrapper[4678]: I1124 11:32:43.038289 4678 generic.go:334] "Generic (PLEG): container finished" podID="763766a9-0307-4ba2-8545-26a817b1f410" containerID="eaad135a5fc1153cbda740a447e5f5a6d4fdc476e9570b7e42c9038744c08576" exitCode=0 Nov 24 11:32:43 crc kubenswrapper[4678]: I1124 11:32:43.038324 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerDied","Data":"eaad135a5fc1153cbda740a447e5f5a6d4fdc476e9570b7e42c9038744c08576"} Nov 24 11:32:43 crc kubenswrapper[4678]: I1124 11:32:43.050177 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" 
podStartSLOduration=3.155970679 podStartE2EDuration="10.05015224s" podCreationTimestamp="2025-11-24 11:32:33 +0000 UTC" firstStartedPulling="2025-11-24 11:32:35.238227658 +0000 UTC m=+966.169287297" lastFinishedPulling="2025-11-24 11:32:42.132409199 +0000 UTC m=+973.063468858" observedRunningTime="2025-11-24 11:32:43.049369948 +0000 UTC m=+973.980429587" watchObservedRunningTime="2025-11-24 11:32:43.05015224 +0000 UTC m=+973.981211909" Nov 24 11:32:44 crc kubenswrapper[4678]: I1124 11:32:44.047569 4678 generic.go:334] "Generic (PLEG): container finished" podID="763766a9-0307-4ba2-8545-26a817b1f410" containerID="24e99767ae27263f8f08752b73c84b664f599c5e45b694391938e309bbdf4bc1" exitCode=0 Nov 24 11:32:44 crc kubenswrapper[4678]: I1124 11:32:44.047629 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerDied","Data":"24e99767ae27263f8f08752b73c84b664f599c5e45b694391938e309bbdf4bc1"} Nov 24 11:32:45 crc kubenswrapper[4678]: I1124 11:32:45.056743 4678 generic.go:334] "Generic (PLEG): container finished" podID="763766a9-0307-4ba2-8545-26a817b1f410" containerID="9dd07336836ec9443048f4fdf020c310750bce23e8e6e8b6cd2e4b5a6106f943" exitCode=0 Nov 24 11:32:45 crc kubenswrapper[4678]: I1124 11:32:45.056792 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerDied","Data":"9dd07336836ec9443048f4fdf020c310750bce23e8e6e8b6cd2e4b5a6106f943"} Nov 24 11:32:46 crc kubenswrapper[4678]: I1124 11:32:46.069131 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerStarted","Data":"677bbdd75ebebbaeec21dabc6ceebaf4c034b73b770b73c8232b3c40277b9e86"} Nov 24 11:32:46 crc kubenswrapper[4678]: I1124 11:32:46.069558 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerStarted","Data":"032a93c429676cc4f49260b11d8d079bead33d359c2081975512237e3f25b255"} Nov 24 11:32:46 crc kubenswrapper[4678]: I1124 11:32:46.069571 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerStarted","Data":"c354ba96ebfc1b414944c113e1960743c03e3611604e449c5732a5f6d693fb20"} Nov 24 11:32:46 crc kubenswrapper[4678]: I1124 11:32:46.069580 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerStarted","Data":"18cc1a61eda26ab9ac4cc101b9fa4072ce323f7233df6b257c68cfc707aa5fda"} Nov 24 11:32:46 crc kubenswrapper[4678]: I1124 11:32:46.069589 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerStarted","Data":"7f8a71137d75dbd3c3e2fee93dabc1b47b2cfd825ebf162512687da9165f3f71"} Nov 24 11:32:47 crc kubenswrapper[4678]: I1124 11:32:47.089812 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mmwxw" event={"ID":"763766a9-0307-4ba2-8545-26a817b1f410","Type":"ContainerStarted","Data":"e2d0773a3e3ba63745608036c087eb3000abcac33b926a869898dd5699a3ff76"} Nov 24 11:32:47 crc kubenswrapper[4678]: I1124 11:32:47.090055 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:47 crc kubenswrapper[4678]: I1124 11:32:47.125697 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mmwxw" podStartSLOduration=7.070275963 podStartE2EDuration="14.12566205s" podCreationTimestamp="2025-11-24 11:32:33 +0000 UTC" firstStartedPulling="2025-11-24 11:32:35.041855902 +0000 UTC m=+965.972915541" lastFinishedPulling="2025-11-24 11:32:42.097241989 
+0000 UTC m=+973.028301628" observedRunningTime="2025-11-24 11:32:47.125109474 +0000 UTC m=+978.056169153" watchObservedRunningTime="2025-11-24 11:32:47.12566205 +0000 UTC m=+978.056721689" Nov 24 11:32:49 crc kubenswrapper[4678]: I1124 11:32:49.852282 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:49 crc kubenswrapper[4678]: I1124 11:32:49.921957 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:32:54 crc kubenswrapper[4678]: I1124 11:32:54.882973 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-266dw" Nov 24 11:32:54 crc kubenswrapper[4678]: I1124 11:32:54.987838 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-89x9s" Nov 24 11:32:55 crc kubenswrapper[4678]: I1124 11:32:55.854513 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-p9g7l" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.058988 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x76k5"] Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.063018 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x76k5" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.065394 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-s55pv" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.067141 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.067559 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.086218 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdh6\" (UniqueName: \"kubernetes.io/projected/b0662789-8a52-4825-b731-6875cb2f3d41-kube-api-access-sxdh6\") pod \"openstack-operator-index-x76k5\" (UID: \"b0662789-8a52-4825-b731-6875cb2f3d41\") " pod="openstack-operators/openstack-operator-index-x76k5" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.101829 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x76k5"] Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.188446 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdh6\" (UniqueName: \"kubernetes.io/projected/b0662789-8a52-4825-b731-6875cb2f3d41-kube-api-access-sxdh6\") pod \"openstack-operator-index-x76k5\" (UID: \"b0662789-8a52-4825-b731-6875cb2f3d41\") " pod="openstack-operators/openstack-operator-index-x76k5" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.208709 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdh6\" (UniqueName: \"kubernetes.io/projected/b0662789-8a52-4825-b731-6875cb2f3d41-kube-api-access-sxdh6\") pod \"openstack-operator-index-x76k5\" (UID: 
\"b0662789-8a52-4825-b731-6875cb2f3d41\") " pod="openstack-operators/openstack-operator-index-x76k5" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.388258 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x76k5" Nov 24 11:32:59 crc kubenswrapper[4678]: I1124 11:32:59.822025 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x76k5"] Nov 24 11:33:00 crc kubenswrapper[4678]: I1124 11:33:00.204026 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x76k5" event={"ID":"b0662789-8a52-4825-b731-6875cb2f3d41","Type":"ContainerStarted","Data":"62ba1f65d9e14ad2f6c8e3bd96d6903fefdf8fcd74b31011049c1c5973689d04"} Nov 24 11:33:01 crc kubenswrapper[4678]: I1124 11:33:01.834496 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x76k5"] Nov 24 11:33:02 crc kubenswrapper[4678]: I1124 11:33:02.438882 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gwlvt"] Nov 24 11:33:02 crc kubenswrapper[4678]: I1124 11:33:02.440012 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:02 crc kubenswrapper[4678]: I1124 11:33:02.459312 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gwlvt"] Nov 24 11:33:02 crc kubenswrapper[4678]: I1124 11:33:02.558641 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78298\" (UniqueName: \"kubernetes.io/projected/0d8e008b-c58e-4697-bbb3-5b2c6def254f-kube-api-access-78298\") pod \"openstack-operator-index-gwlvt\" (UID: \"0d8e008b-c58e-4697-bbb3-5b2c6def254f\") " pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:02 crc kubenswrapper[4678]: I1124 11:33:02.660272 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78298\" (UniqueName: \"kubernetes.io/projected/0d8e008b-c58e-4697-bbb3-5b2c6def254f-kube-api-access-78298\") pod \"openstack-operator-index-gwlvt\" (UID: \"0d8e008b-c58e-4697-bbb3-5b2c6def254f\") " pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:02 crc kubenswrapper[4678]: I1124 11:33:02.682503 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78298\" (UniqueName: \"kubernetes.io/projected/0d8e008b-c58e-4697-bbb3-5b2c6def254f-kube-api-access-78298\") pod \"openstack-operator-index-gwlvt\" (UID: \"0d8e008b-c58e-4697-bbb3-5b2c6def254f\") " pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:02 crc kubenswrapper[4678]: I1124 11:33:02.770915 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:03 crc kubenswrapper[4678]: I1124 11:33:03.236003 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x76k5" event={"ID":"b0662789-8a52-4825-b731-6875cb2f3d41","Type":"ContainerStarted","Data":"f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0"} Nov 24 11:33:03 crc kubenswrapper[4678]: I1124 11:33:03.236138 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-x76k5" podUID="b0662789-8a52-4825-b731-6875cb2f3d41" containerName="registry-server" containerID="cri-o://f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0" gracePeriod=2 Nov 24 11:33:03 crc kubenswrapper[4678]: I1124 11:33:03.243186 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gwlvt"] Nov 24 11:33:03 crc kubenswrapper[4678]: I1124 11:33:03.257933 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x76k5" podStartSLOduration=1.9749207690000001 podStartE2EDuration="4.257908364s" podCreationTimestamp="2025-11-24 11:32:59 +0000 UTC" firstStartedPulling="2025-11-24 11:32:59.83374443 +0000 UTC m=+990.764804059" lastFinishedPulling="2025-11-24 11:33:02.116732015 +0000 UTC m=+993.047791654" observedRunningTime="2025-11-24 11:33:03.256492876 +0000 UTC m=+994.187552515" watchObservedRunningTime="2025-11-24 11:33:03.257908364 +0000 UTC m=+994.188968013" Nov 24 11:33:03 crc kubenswrapper[4678]: I1124 11:33:03.717190 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x76k5" Nov 24 11:33:03 crc kubenswrapper[4678]: I1124 11:33:03.897451 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdh6\" (UniqueName: \"kubernetes.io/projected/b0662789-8a52-4825-b731-6875cb2f3d41-kube-api-access-sxdh6\") pod \"b0662789-8a52-4825-b731-6875cb2f3d41\" (UID: \"b0662789-8a52-4825-b731-6875cb2f3d41\") " Nov 24 11:33:03 crc kubenswrapper[4678]: I1124 11:33:03.908373 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0662789-8a52-4825-b731-6875cb2f3d41-kube-api-access-sxdh6" (OuterVolumeSpecName: "kube-api-access-sxdh6") pod "b0662789-8a52-4825-b731-6875cb2f3d41" (UID: "b0662789-8a52-4825-b731-6875cb2f3d41"). InnerVolumeSpecName "kube-api-access-sxdh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.003521 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxdh6\" (UniqueName: \"kubernetes.io/projected/b0662789-8a52-4825-b731-6875cb2f3d41-kube-api-access-sxdh6\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.251860 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gwlvt" event={"ID":"0d8e008b-c58e-4697-bbb3-5b2c6def254f","Type":"ContainerStarted","Data":"382edc3f3b670659b063f9ad5b266a04e2b590597271f78283a25a3cc4c01e6b"} Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.251918 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gwlvt" event={"ID":"0d8e008b-c58e-4697-bbb3-5b2c6def254f","Type":"ContainerStarted","Data":"268d55b3cb5eb5754e8408bdf3e86c0a7c1aa5122c845f2eacf2ef710b6474cc"} Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.255383 4678 generic.go:334] "Generic (PLEG): container finished" 
podID="b0662789-8a52-4825-b731-6875cb2f3d41" containerID="f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0" exitCode=0 Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.255463 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x76k5" event={"ID":"b0662789-8a52-4825-b731-6875cb2f3d41","Type":"ContainerDied","Data":"f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0"} Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.255512 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x76k5" event={"ID":"b0662789-8a52-4825-b731-6875cb2f3d41","Type":"ContainerDied","Data":"62ba1f65d9e14ad2f6c8e3bd96d6903fefdf8fcd74b31011049c1c5973689d04"} Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.255549 4678 scope.go:117] "RemoveContainer" containerID="f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0" Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.255585 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x76k5" Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.284847 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gwlvt" podStartSLOduration=2.234766665 podStartE2EDuration="2.28442288s" podCreationTimestamp="2025-11-24 11:33:02 +0000 UTC" firstStartedPulling="2025-11-24 11:33:03.249539681 +0000 UTC m=+994.180599320" lastFinishedPulling="2025-11-24 11:33:03.299195896 +0000 UTC m=+994.230255535" observedRunningTime="2025-11-24 11:33:04.273543799 +0000 UTC m=+995.204603448" watchObservedRunningTime="2025-11-24 11:33:04.28442288 +0000 UTC m=+995.215482579" Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.297200 4678 scope.go:117] "RemoveContainer" containerID="f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0" Nov 24 11:33:04 crc kubenswrapper[4678]: E1124 11:33:04.297847 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0\": container with ID starting with f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0 not found: ID does not exist" containerID="f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0" Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.298008 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0"} err="failed to get container status \"f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0\": rpc error: code = NotFound desc = could not find container \"f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0\": container with ID starting with f9e3f7f90dd96d6736114e4911987013874bdfcd5f2811acd4865401257013b0 not found: ID does not exist" Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 
11:33:04.300732 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x76k5"] Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.308637 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-x76k5"] Nov 24 11:33:04 crc kubenswrapper[4678]: I1124 11:33:04.855975 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mmwxw" Nov 24 11:33:05 crc kubenswrapper[4678]: I1124 11:33:05.907864 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0662789-8a52-4825-b731-6875cb2f3d41" path="/var/lib/kubelet/pods/b0662789-8a52-4825-b731-6875cb2f3d41/volumes" Nov 24 11:33:12 crc kubenswrapper[4678]: I1124 11:33:12.772603 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:12 crc kubenswrapper[4678]: I1124 11:33:12.773542 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:12 crc kubenswrapper[4678]: I1124 11:33:12.807495 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:13 crc kubenswrapper[4678]: I1124 11:33:13.352323 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gwlvt" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.104519 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb"] Nov 24 11:33:19 crc kubenswrapper[4678]: E1124 11:33:19.105688 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0662789-8a52-4825-b731-6875cb2f3d41" containerName="registry-server" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.105707 4678 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b0662789-8a52-4825-b731-6875cb2f3d41" containerName="registry-server" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.105952 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0662789-8a52-4825-b731-6875cb2f3d41" containerName="registry-server" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.107285 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.109515 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-t4jkl" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.112297 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb"] Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.208787 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-util\") pod \"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.208930 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69cx6\" (UniqueName: \"kubernetes.io/projected/37b5d808-3ae5-47a2-95d5-fb22a1e073de-kube-api-access-69cx6\") pod \"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.208994 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-bundle\") pod \"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.310153 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69cx6\" (UniqueName: \"kubernetes.io/projected/37b5d808-3ae5-47a2-95d5-fb22a1e073de-kube-api-access-69cx6\") pod \"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.310241 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-bundle\") pod \"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.310324 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-util\") pod \"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.310793 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-bundle\") pod 
\"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.310932 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-util\") pod \"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.334393 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69cx6\" (UniqueName: \"kubernetes.io/projected/37b5d808-3ae5-47a2-95d5-fb22a1e073de-kube-api-access-69cx6\") pod \"8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.427832 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:19 crc kubenswrapper[4678]: I1124 11:33:19.870404 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb"] Nov 24 11:33:20 crc kubenswrapper[4678]: I1124 11:33:20.384861 4678 generic.go:334] "Generic (PLEG): container finished" podID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerID="0ebca30c400ea488db7d0511f8344f4259995c7bed273075bcb5626e7b46e26f" exitCode=0 Nov 24 11:33:20 crc kubenswrapper[4678]: I1124 11:33:20.384935 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" event={"ID":"37b5d808-3ae5-47a2-95d5-fb22a1e073de","Type":"ContainerDied","Data":"0ebca30c400ea488db7d0511f8344f4259995c7bed273075bcb5626e7b46e26f"} Nov 24 11:33:20 crc kubenswrapper[4678]: I1124 11:33:20.385382 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" event={"ID":"37b5d808-3ae5-47a2-95d5-fb22a1e073de","Type":"ContainerStarted","Data":"440a26189551423539dcae71008ae6b606d60b609e500ecbd78d7f8375377b2a"} Nov 24 11:33:21 crc kubenswrapper[4678]: I1124 11:33:21.399656 4678 generic.go:334] "Generic (PLEG): container finished" podID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerID="1f79388a3f67c8d4bdfafc03f4a5de975f4e39bd15041f1e75a20b80b4224ef5" exitCode=0 Nov 24 11:33:21 crc kubenswrapper[4678]: I1124 11:33:21.399787 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" event={"ID":"37b5d808-3ae5-47a2-95d5-fb22a1e073de","Type":"ContainerDied","Data":"1f79388a3f67c8d4bdfafc03f4a5de975f4e39bd15041f1e75a20b80b4224ef5"} Nov 24 11:33:22 crc kubenswrapper[4678]: I1124 11:33:22.413846 4678 generic.go:334] 
"Generic (PLEG): container finished" podID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerID="6dcc109281820c5a2ffbc445ae1e3805a0a411c136e6f9425b33b1c4d650d9c8" exitCode=0 Nov 24 11:33:22 crc kubenswrapper[4678]: I1124 11:33:22.414079 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" event={"ID":"37b5d808-3ae5-47a2-95d5-fb22a1e073de","Type":"ContainerDied","Data":"6dcc109281820c5a2ffbc445ae1e3805a0a411c136e6f9425b33b1c4d650d9c8"} Nov 24 11:33:23 crc kubenswrapper[4678]: I1124 11:33:23.816778 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:23 crc kubenswrapper[4678]: I1124 11:33:23.902156 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-bundle\") pod \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " Nov 24 11:33:23 crc kubenswrapper[4678]: I1124 11:33:23.902443 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-util\") pod \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " Nov 24 11:33:23 crc kubenswrapper[4678]: I1124 11:33:23.902555 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69cx6\" (UniqueName: \"kubernetes.io/projected/37b5d808-3ae5-47a2-95d5-fb22a1e073de-kube-api-access-69cx6\") pod \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\" (UID: \"37b5d808-3ae5-47a2-95d5-fb22a1e073de\") " Nov 24 11:33:23 crc kubenswrapper[4678]: I1124 11:33:23.903394 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-bundle" (OuterVolumeSpecName: "bundle") pod "37b5d808-3ae5-47a2-95d5-fb22a1e073de" (UID: "37b5d808-3ae5-47a2-95d5-fb22a1e073de"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:23 crc kubenswrapper[4678]: I1124 11:33:23.912004 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b5d808-3ae5-47a2-95d5-fb22a1e073de-kube-api-access-69cx6" (OuterVolumeSpecName: "kube-api-access-69cx6") pod "37b5d808-3ae5-47a2-95d5-fb22a1e073de" (UID: "37b5d808-3ae5-47a2-95d5-fb22a1e073de"). InnerVolumeSpecName "kube-api-access-69cx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:33:23 crc kubenswrapper[4678]: I1124 11:33:23.924134 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-util" (OuterVolumeSpecName: "util") pod "37b5d808-3ae5-47a2-95d5-fb22a1e073de" (UID: "37b5d808-3ae5-47a2-95d5-fb22a1e073de"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:33:24 crc kubenswrapper[4678]: I1124 11:33:24.004703 4678 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-util\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:24 crc kubenswrapper[4678]: I1124 11:33:24.005005 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69cx6\" (UniqueName: \"kubernetes.io/projected/37b5d808-3ae5-47a2-95d5-fb22a1e073de-kube-api-access-69cx6\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:24 crc kubenswrapper[4678]: I1124 11:33:24.005081 4678 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37b5d808-3ae5-47a2-95d5-fb22a1e073de-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:33:24 crc kubenswrapper[4678]: I1124 11:33:24.437164 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" event={"ID":"37b5d808-3ae5-47a2-95d5-fb22a1e073de","Type":"ContainerDied","Data":"440a26189551423539dcae71008ae6b606d60b609e500ecbd78d7f8375377b2a"} Nov 24 11:33:24 crc kubenswrapper[4678]: I1124 11:33:24.437209 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="440a26189551423539dcae71008ae6b606d60b609e500ecbd78d7f8375377b2a" Nov 24 11:33:24 crc kubenswrapper[4678]: I1124 11:33:24.437248 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.217966 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp"] Nov 24 11:33:32 crc kubenswrapper[4678]: E1124 11:33:32.219198 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerName="extract" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.219220 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerName="extract" Nov 24 11:33:32 crc kubenswrapper[4678]: E1124 11:33:32.219255 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerName="util" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.219263 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerName="util" Nov 24 11:33:32 crc kubenswrapper[4678]: E1124 11:33:32.219284 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerName="pull" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.219292 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerName="pull" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.219524 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b5d808-3ae5-47a2-95d5-fb22a1e073de" containerName="extract" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.220721 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.232723 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-5b8cw" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.248606 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp"] Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.293491 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t56n\" (UniqueName: \"kubernetes.io/projected/e6986d07-7f65-41b6-bde9-a0d486e290dc-kube-api-access-8t56n\") pod \"openstack-operator-controller-operator-9f56d7bd5-p4btp\" (UID: \"e6986d07-7f65-41b6-bde9-a0d486e290dc\") " pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.396003 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t56n\" (UniqueName: \"kubernetes.io/projected/e6986d07-7f65-41b6-bde9-a0d486e290dc-kube-api-access-8t56n\") pod \"openstack-operator-controller-operator-9f56d7bd5-p4btp\" (UID: \"e6986d07-7f65-41b6-bde9-a0d486e290dc\") " pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.436023 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t56n\" (UniqueName: \"kubernetes.io/projected/e6986d07-7f65-41b6-bde9-a0d486e290dc-kube-api-access-8t56n\") pod \"openstack-operator-controller-operator-9f56d7bd5-p4btp\" (UID: \"e6986d07-7f65-41b6-bde9-a0d486e290dc\") " pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" Nov 24 11:33:32 crc kubenswrapper[4678]: I1124 11:33:32.542841 4678 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" Nov 24 11:33:33 crc kubenswrapper[4678]: I1124 11:33:33.131810 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp"] Nov 24 11:33:33 crc kubenswrapper[4678]: W1124 11:33:33.139553 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6986d07_7f65_41b6_bde9_a0d486e290dc.slice/crio-dda67adfa03e5b87b9a64507d898912724991a8bb6446b79a5d197679c7b0bf5 WatchSource:0}: Error finding container dda67adfa03e5b87b9a64507d898912724991a8bb6446b79a5d197679c7b0bf5: Status 404 returned error can't find the container with id dda67adfa03e5b87b9a64507d898912724991a8bb6446b79a5d197679c7b0bf5 Nov 24 11:33:33 crc kubenswrapper[4678]: I1124 11:33:33.529388 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" event={"ID":"e6986d07-7f65-41b6-bde9-a0d486e290dc","Type":"ContainerStarted","Data":"dda67adfa03e5b87b9a64507d898912724991a8bb6446b79a5d197679c7b0bf5"} Nov 24 11:33:38 crc kubenswrapper[4678]: I1124 11:33:38.574927 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" event={"ID":"e6986d07-7f65-41b6-bde9-a0d486e290dc","Type":"ContainerStarted","Data":"b55476524ee4452345786d262e3b18deaad0c03e1bac6f008b5e6f9164ad0176"} Nov 24 11:33:40 crc kubenswrapper[4678]: I1124 11:33:40.592501 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" event={"ID":"e6986d07-7f65-41b6-bde9-a0d486e290dc","Type":"ContainerStarted","Data":"de5359cfa65ba634bc008101fabd6034b5b48168ebc7a5b43928cf94fdc4efea"} Nov 24 11:33:40 crc kubenswrapper[4678]: I1124 
11:33:40.594871 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" Nov 24 11:33:40 crc kubenswrapper[4678]: I1124 11:33:40.629615 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" podStartSLOduration=2.019150883 podStartE2EDuration="8.629594256s" podCreationTimestamp="2025-11-24 11:33:32 +0000 UTC" firstStartedPulling="2025-11-24 11:33:33.142444558 +0000 UTC m=+1024.073504197" lastFinishedPulling="2025-11-24 11:33:39.752887931 +0000 UTC m=+1030.683947570" observedRunningTime="2025-11-24 11:33:40.623523075 +0000 UTC m=+1031.554582724" watchObservedRunningTime="2025-11-24 11:33:40.629594256 +0000 UTC m=+1031.560653895" Nov 24 11:33:42 crc kubenswrapper[4678]: I1124 11:33:42.548651 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-9f56d7bd5-p4btp" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.297119 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.298025 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.637562 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j"] Nov 24 
11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.638903 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.644959 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ddhmr" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.650274 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.651831 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.661547 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6fnpd" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.672710 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.679754 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.688179 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.689769 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.701289 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zj6h2" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.746028 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdsdv\" (UniqueName: \"kubernetes.io/projected/1d845025-efc3-47c5-b640-59eeafc744a2-kube-api-access-bdsdv\") pod \"barbican-operator-controller-manager-75fb479bcc-xlx8j\" (UID: \"1d845025-efc3-47c5-b640-59eeafc744a2\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.746100 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhwmf\" (UniqueName: \"kubernetes.io/projected/e50daf7a-089a-48d0-883f-5db082bb6908-kube-api-access-fhwmf\") pod \"designate-operator-controller-manager-767ccfd65f-gmrd8\" (UID: \"e50daf7a-089a-48d0-883f-5db082bb6908\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.746134 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfcx\" (UniqueName: \"kubernetes.io/projected/7f7a3294-7af7-44cb-95b7-3214cda4de48-kube-api-access-7vfcx\") pod \"cinder-operator-controller-manager-6498cbf48f-nxdjc\" (UID: \"7f7a3294-7af7-44cb-95b7-3214cda4de48\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.763029 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 
11:34:00.764402 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.769003 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-grlvn" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.792439 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.798645 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.800222 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.814284 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2lm48" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.815857 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.817202 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.822803 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-42dw6" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.833739 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.835384 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.850330 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.850572 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jt8ct" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.853202 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdz9j\" (UniqueName: \"kubernetes.io/projected/e9db91a3-68e2-4500-ab6a-d1055c6e6dde-kube-api-access-qdz9j\") pod \"horizon-operator-controller-manager-598f69df5d-jk9k4\" (UID: \"e9db91a3-68e2-4500-ab6a-d1055c6e6dde\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.853333 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbdm\" (UniqueName: \"kubernetes.io/projected/276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5-kube-api-access-prbdm\") pod \"glance-operator-controller-manager-7969689c84-cxm7x\" (UID: \"276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5\") " 
pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.853390 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65kj\" (UniqueName: \"kubernetes.io/projected/f98bea89-6852-42c9-a69b-9867fe021eb8-kube-api-access-f65kj\") pod \"heat-operator-controller-manager-56f54d6746-jjbs2\" (UID: \"f98bea89-6852-42c9-a69b-9867fe021eb8\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.853428 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdsdv\" (UniqueName: \"kubernetes.io/projected/1d845025-efc3-47c5-b640-59eeafc744a2-kube-api-access-bdsdv\") pod \"barbican-operator-controller-manager-75fb479bcc-xlx8j\" (UID: \"1d845025-efc3-47c5-b640-59eeafc744a2\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.853460 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhwmf\" (UniqueName: \"kubernetes.io/projected/e50daf7a-089a-48d0-883f-5db082bb6908-kube-api-access-fhwmf\") pod \"designate-operator-controller-manager-767ccfd65f-gmrd8\" (UID: \"e50daf7a-089a-48d0-883f-5db082bb6908\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.853484 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vfcx\" (UniqueName: \"kubernetes.io/projected/7f7a3294-7af7-44cb-95b7-3214cda4de48-kube-api-access-7vfcx\") pod \"cinder-operator-controller-manager-6498cbf48f-nxdjc\" (UID: \"7f7a3294-7af7-44cb-95b7-3214cda4de48\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 
11:34:00.904312 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2"] Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.909484 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdsdv\" (UniqueName: \"kubernetes.io/projected/1d845025-efc3-47c5-b640-59eeafc744a2-kube-api-access-bdsdv\") pod \"barbican-operator-controller-manager-75fb479bcc-xlx8j\" (UID: \"1d845025-efc3-47c5-b640-59eeafc744a2\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.946568 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhwmf\" (UniqueName: \"kubernetes.io/projected/e50daf7a-089a-48d0-883f-5db082bb6908-kube-api-access-fhwmf\") pod \"designate-operator-controller-manager-767ccfd65f-gmrd8\" (UID: \"e50daf7a-089a-48d0-883f-5db082bb6908\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.953421 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vfcx\" (UniqueName: \"kubernetes.io/projected/7f7a3294-7af7-44cb-95b7-3214cda4de48-kube-api-access-7vfcx\") pod \"cinder-operator-controller-manager-6498cbf48f-nxdjc\" (UID: \"7f7a3294-7af7-44cb-95b7-3214cda4de48\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.960436 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eecabfcc-62de-4512-b5e8-1685d7fd1144-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-r4sjj\" (UID: \"eecabfcc-62de-4512-b5e8-1685d7fd1144\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 
11:34:00.962784 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.965372 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbdm\" (UniqueName: \"kubernetes.io/projected/276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5-kube-api-access-prbdm\") pod \"glance-operator-controller-manager-7969689c84-cxm7x\" (UID: \"276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.965539 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f957q\" (UniqueName: \"kubernetes.io/projected/eecabfcc-62de-4512-b5e8-1685d7fd1144-kube-api-access-f957q\") pod \"infra-operator-controller-manager-6dd8864d7c-r4sjj\" (UID: \"eecabfcc-62de-4512-b5e8-1685d7fd1144\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.965628 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f65kj\" (UniqueName: \"kubernetes.io/projected/f98bea89-6852-42c9-a69b-9867fe021eb8-kube-api-access-f65kj\") pod \"heat-operator-controller-manager-56f54d6746-jjbs2\" (UID: \"f98bea89-6852-42c9-a69b-9867fe021eb8\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.966007 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdz9j\" (UniqueName: \"kubernetes.io/projected/e9db91a3-68e2-4500-ab6a-d1055c6e6dde-kube-api-access-qdz9j\") pod \"horizon-operator-controller-manager-598f69df5d-jk9k4\" (UID: \"e9db91a3-68e2-4500-ab6a-d1055c6e6dde\") " 
pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" Nov 24 11:34:00 crc kubenswrapper[4678]: I1124 11:34:00.981390 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.016836 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbdm\" (UniqueName: \"kubernetes.io/projected/276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5-kube-api-access-prbdm\") pod \"glance-operator-controller-manager-7969689c84-cxm7x\" (UID: \"276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.033627 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.037429 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f65kj\" (UniqueName: \"kubernetes.io/projected/f98bea89-6852-42c9-a69b-9867fe021eb8-kube-api-access-f65kj\") pod \"heat-operator-controller-manager-56f54d6746-jjbs2\" (UID: \"f98bea89-6852-42c9-a69b-9867fe021eb8\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.038131 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdz9j\" (UniqueName: \"kubernetes.io/projected/e9db91a3-68e2-4500-ab6a-d1055c6e6dde-kube-api-access-qdz9j\") pod \"horizon-operator-controller-manager-598f69df5d-jk9k4\" (UID: \"e9db91a3-68e2-4500-ab6a-d1055c6e6dde\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.038186 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.082886 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eecabfcc-62de-4512-b5e8-1685d7fd1144-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-r4sjj\" (UID: \"eecabfcc-62de-4512-b5e8-1685d7fd1144\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.083244 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f957q\" (UniqueName: \"kubernetes.io/projected/eecabfcc-62de-4512-b5e8-1685d7fd1144-kube-api-access-f957q\") pod \"infra-operator-controller-manager-6dd8864d7c-r4sjj\" (UID: \"eecabfcc-62de-4512-b5e8-1685d7fd1144\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:01 crc kubenswrapper[4678]: E1124 11:34:01.083921 4678 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 11:34:01 crc kubenswrapper[4678]: E1124 11:34:01.084046 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eecabfcc-62de-4512-b5e8-1685d7fd1144-cert podName:eecabfcc-62de-4512-b5e8-1685d7fd1144 nodeName:}" failed. No retries permitted until 2025-11-24 11:34:01.584027088 +0000 UTC m=+1052.515086717 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eecabfcc-62de-4512-b5e8-1685d7fd1144-cert") pod "infra-operator-controller-manager-6dd8864d7c-r4sjj" (UID: "eecabfcc-62de-4512-b5e8-1685d7fd1144") : secret "infra-operator-webhook-server-cert" not found Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.088745 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.113030 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.114069 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f957q\" (UniqueName: \"kubernetes.io/projected/eecabfcc-62de-4512-b5e8-1685d7fd1144-kube-api-access-f957q\") pod \"infra-operator-controller-manager-6dd8864d7c-r4sjj\" (UID: \"eecabfcc-62de-4512-b5e8-1685d7fd1144\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.131364 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.148571 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.150563 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.151724 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.158623 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.160541 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.166319 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ddqwz" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.166507 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-bmp5c" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.176640 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.188381 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.198044 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.249850 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.251813 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.258260 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rt2tl" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.273857 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.277030 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.288852 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcgn\" (UniqueName: \"kubernetes.io/projected/edbe0de9-67d0-49cc-a867-3483035e3c51-kube-api-access-qxcgn\") pod \"keystone-operator-controller-manager-7454b96578-2h8fr\" (UID: \"edbe0de9-67d0-49cc-a867-3483035e3c51\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.289108 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rggg9\" (UniqueName: \"kubernetes.io/projected/d2fab4cb-dff4-439e-a97b-b35b8a2203c6-kube-api-access-rggg9\") pod \"ironic-operator-controller-manager-99b499f4-q77cx\" (UID: \"d2fab4cb-dff4-439e-a97b-b35b8a2203c6\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.301845 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.303478 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.303760 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-jjssl" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.317268 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-smh4r" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.347187 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.359352 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.365410 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.387535 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.392142 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.394821 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rggg9\" (UniqueName: \"kubernetes.io/projected/d2fab4cb-dff4-439e-a97b-b35b8a2203c6-kube-api-access-rggg9\") pod \"ironic-operator-controller-manager-99b499f4-q77cx\" (UID: \"d2fab4cb-dff4-439e-a97b-b35b8a2203c6\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 
11:34:01.394958 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxcgn\" (UniqueName: \"kubernetes.io/projected/edbe0de9-67d0-49cc-a867-3483035e3c51-kube-api-access-qxcgn\") pod \"keystone-operator-controller-manager-7454b96578-2h8fr\" (UID: \"edbe0de9-67d0-49cc-a867-3483035e3c51\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.394995 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjds\" (UniqueName: \"kubernetes.io/projected/5fbf7159-3ac4-4387-a4e5-c9a42cc9e035-kube-api-access-lkjds\") pod \"manila-operator-controller-manager-58f887965d-9zvz7\" (UID: \"5fbf7159-3ac4-4387-a4e5-c9a42cc9e035\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.395518 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvbh\" (UniqueName: \"kubernetes.io/projected/42e3cbe3-ad98-46e4-9a27-497ad6ca2026-kube-api-access-7vvbh\") pod \"neutron-operator-controller-manager-78bd47f458-7kbkq\" (UID: \"42e3cbe3-ad98-46e4-9a27-497ad6ca2026\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.395553 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwmr\" (UniqueName: \"kubernetes.io/projected/6a9d3c2c-4f10-4d08-bade-aa93ac52e7be-kube-api-access-xjwmr\") pod \"mariadb-operator-controller-manager-54b5986bb8-vgv2l\" (UID: \"6a9d3c2c-4f10-4d08-bade-aa93ac52e7be\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.397339 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.397374 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.408457 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hjbpc" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.408633 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-kbdqc" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.424604 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.425130 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.430206 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.431698 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.437012 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rggg9\" (UniqueName: \"kubernetes.io/projected/d2fab4cb-dff4-439e-a97b-b35b8a2203c6-kube-api-access-rggg9\") pod \"ironic-operator-controller-manager-99b499f4-q77cx\" (UID: \"d2fab4cb-dff4-439e-a97b-b35b8a2203c6\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.437757 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxcgn\" (UniqueName: \"kubernetes.io/projected/edbe0de9-67d0-49cc-a867-3483035e3c51-kube-api-access-qxcgn\") pod \"keystone-operator-controller-manager-7454b96578-2h8fr\" (UID: \"edbe0de9-67d0-49cc-a867-3483035e3c51\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.438003 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.438247 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xv5r7" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.441707 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.450702 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.452774 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-h848j" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.465400 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.483028 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.486576 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-76fvq" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.498438 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.499923 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.501410 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vmch\" (UniqueName: \"kubernetes.io/projected/38bd8adb-717b-4ad8-af98-afe361890a1d-kube-api-access-6vmch\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-rk24x\" (UID: \"38bd8adb-717b-4ad8-af98-afe361890a1d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.501441 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7wn\" (UniqueName: \"kubernetes.io/projected/cf5a2355-2895-4522-b4dc-cca47eb2d33f-kube-api-access-ms7wn\") pod \"ovn-operator-controller-manager-54fc5f65b7-q6dxg\" (UID: \"cf5a2355-2895-4522-b4dc-cca47eb2d33f\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.501479 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbszz\" (UniqueName: \"kubernetes.io/projected/be206532-b60c-4047-8835-1b57d1714883-kube-api-access-bbszz\") pod \"nova-operator-controller-manager-cfbb9c588-wvz4p\" (UID: \"be206532-b60c-4047-8835-1b57d1714883\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.501586 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hthxw\" (UniqueName: 
\"kubernetes.io/projected/32d872bd-6c15-4efa-9c97-9feeebf99191-kube-api-access-hthxw\") pod \"octavia-operator-controller-manager-54cfbf4c7d-q6kcx\" (UID: \"32d872bd-6c15-4efa-9c97-9feeebf99191\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.501627 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvbh\" (UniqueName: \"kubernetes.io/projected/42e3cbe3-ad98-46e4-9a27-497ad6ca2026-kube-api-access-7vvbh\") pod \"neutron-operator-controller-manager-78bd47f458-7kbkq\" (UID: \"42e3cbe3-ad98-46e4-9a27-497ad6ca2026\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.505379 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjwmr\" (UniqueName: \"kubernetes.io/projected/6a9d3c2c-4f10-4d08-bade-aa93ac52e7be-kube-api-access-xjwmr\") pod \"mariadb-operator-controller-manager-54b5986bb8-vgv2l\" (UID: \"6a9d3c2c-4f10-4d08-bade-aa93ac52e7be\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.505607 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-rk24x\" (UID: \"38bd8adb-717b-4ad8-af98-afe361890a1d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.505639 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkjds\" (UniqueName: \"kubernetes.io/projected/5fbf7159-3ac4-4387-a4e5-c9a42cc9e035-kube-api-access-lkjds\") pod \"manila-operator-controller-manager-58f887965d-9zvz7\" 
(UID: \"5fbf7159-3ac4-4387-a4e5-c9a42cc9e035\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.515090 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.524424 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvbh\" (UniqueName: \"kubernetes.io/projected/42e3cbe3-ad98-46e4-9a27-497ad6ca2026-kube-api-access-7vvbh\") pod \"neutron-operator-controller-manager-78bd47f458-7kbkq\" (UID: \"42e3cbe3-ad98-46e4-9a27-497ad6ca2026\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.535039 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.541147 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-x8n72"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.542599 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.547954 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-ttxx2" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.557371 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.568321 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkjds\" (UniqueName: \"kubernetes.io/projected/5fbf7159-3ac4-4387-a4e5-c9a42cc9e035-kube-api-access-lkjds\") pod \"manila-operator-controller-manager-58f887965d-9zvz7\" (UID: \"5fbf7159-3ac4-4387-a4e5-c9a42cc9e035\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.569474 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjwmr\" (UniqueName: \"kubernetes.io/projected/6a9d3c2c-4f10-4d08-bade-aa93ac52e7be-kube-api-access-xjwmr\") pod \"mariadb-operator-controller-manager-54b5986bb8-vgv2l\" (UID: \"6a9d3c2c-4f10-4d08-bade-aa93ac52e7be\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.569956 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-x8n72"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.582079 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.583580 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.585782 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ffd5q" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.605358 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.611055 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5mc5\" (UniqueName: \"kubernetes.io/projected/4599c525-39b6-412f-b668-79c5e575c42e-kube-api-access-q5mc5\") pod \"placement-operator-controller-manager-5b797b8dff-cj546\" (UID: \"4599c525-39b6-412f-b668-79c5e575c42e\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.611109 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-rk24x\" (UID: \"38bd8adb-717b-4ad8-af98-afe361890a1d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.611236 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vmch\" (UniqueName: \"kubernetes.io/projected/38bd8adb-717b-4ad8-af98-afe361890a1d-kube-api-access-6vmch\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-rk24x\" (UID: \"38bd8adb-717b-4ad8-af98-afe361890a1d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 
11:34:01.611285 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7wn\" (UniqueName: \"kubernetes.io/projected/cf5a2355-2895-4522-b4dc-cca47eb2d33f-kube-api-access-ms7wn\") pod \"ovn-operator-controller-manager-54fc5f65b7-q6dxg\" (UID: \"cf5a2355-2895-4522-b4dc-cca47eb2d33f\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.611336 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbszz\" (UniqueName: \"kubernetes.io/projected/be206532-b60c-4047-8835-1b57d1714883-kube-api-access-bbszz\") pod \"nova-operator-controller-manager-cfbb9c588-wvz4p\" (UID: \"be206532-b60c-4047-8835-1b57d1714883\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.611359 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hthxw\" (UniqueName: \"kubernetes.io/projected/32d872bd-6c15-4efa-9c97-9feeebf99191-kube-api-access-hthxw\") pod \"octavia-operator-controller-manager-54cfbf4c7d-q6kcx\" (UID: \"32d872bd-6c15-4efa-9c97-9feeebf99191\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.611473 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eecabfcc-62de-4512-b5e8-1685d7fd1144-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-r4sjj\" (UID: \"eecabfcc-62de-4512-b5e8-1685d7fd1144\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.611553 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qq9l\" (UniqueName: 
\"kubernetes.io/projected/0fb5a95d-61ef-4850-ba59-0d637233ae88-kube-api-access-9qq9l\") pod \"swift-operator-controller-manager-d656998f4-x8n72\" (UID: \"0fb5a95d-61ef-4850-ba59-0d637233ae88\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" Nov 24 11:34:01 crc kubenswrapper[4678]: E1124 11:34:01.611244 4678 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.616472 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bts74"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.618380 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" Nov 24 11:34:01 crc kubenswrapper[4678]: E1124 11:34:01.618794 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert podName:38bd8adb-717b-4ad8-af98-afe361890a1d nodeName:}" failed. No retries permitted until 2025-11-24 11:34:02.118765488 +0000 UTC m=+1053.049825117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" (UID: "38bd8adb-717b-4ad8-af98-afe361890a1d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.624348 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zbf75" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.633709 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.635049 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.643843 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bts74"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.650580 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eecabfcc-62de-4512-b5e8-1685d7fd1144-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-r4sjj\" (UID: \"eecabfcc-62de-4512-b5e8-1685d7fd1144\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.677086 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vmch\" (UniqueName: \"kubernetes.io/projected/38bd8adb-717b-4ad8-af98-afe361890a1d-kube-api-access-6vmch\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-rk24x\" (UID: \"38bd8adb-717b-4ad8-af98-afe361890a1d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.677941 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.695649 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbszz\" (UniqueName: \"kubernetes.io/projected/be206532-b60c-4047-8835-1b57d1714883-kube-api-access-bbszz\") pod \"nova-operator-controller-manager-cfbb9c588-wvz4p\" (UID: \"be206532-b60c-4047-8835-1b57d1714883\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.696206 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7wn\" (UniqueName: \"kubernetes.io/projected/cf5a2355-2895-4522-b4dc-cca47eb2d33f-kube-api-access-ms7wn\") pod \"ovn-operator-controller-manager-54fc5f65b7-q6dxg\" (UID: \"cf5a2355-2895-4522-b4dc-cca47eb2d33f\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.696639 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hthxw\" (UniqueName: \"kubernetes.io/projected/32d872bd-6c15-4efa-9c97-9feeebf99191-kube-api-access-hthxw\") pod \"octavia-operator-controller-manager-54cfbf4c7d-q6kcx\" (UID: \"32d872bd-6c15-4efa-9c97-9feeebf99191\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.715958 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfgkf\" (UniqueName: \"kubernetes.io/projected/2e9318f0-ff18-4a7b-8a43-2c37c3d0d593-kube-api-access-lfgkf\") pod \"test-operator-controller-manager-b4c496f69-bts74\" (UID: \"2e9318f0-ff18-4a7b-8a43-2c37c3d0d593\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.716101 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qq9l\" (UniqueName: \"kubernetes.io/projected/0fb5a95d-61ef-4850-ba59-0d637233ae88-kube-api-access-9qq9l\") pod \"swift-operator-controller-manager-d656998f4-x8n72\" (UID: \"0fb5a95d-61ef-4850-ba59-0d637233ae88\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.716134 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkfr4\" (UniqueName: \"kubernetes.io/projected/d494c9ab-cbef-4a2a-a865-2921ec2ab9e7-kube-api-access-fkfr4\") pod \"telemetry-operator-controller-manager-7d86657865-d4wl2\" (UID: \"d494c9ab-cbef-4a2a-a865-2921ec2ab9e7\") " pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.716231 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5mc5\" (UniqueName: \"kubernetes.io/projected/4599c525-39b6-412f-b668-79c5e575c42e-kube-api-access-q5mc5\") pod \"placement-operator-controller-manager-5b797b8dff-cj546\" (UID: \"4599c525-39b6-412f-b668-79c5e575c42e\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.737908 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.750770 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.754000 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.763040 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-n2tmw" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.779517 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.786017 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.791163 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5mc5\" (UniqueName: \"kubernetes.io/projected/4599c525-39b6-412f-b668-79c5e575c42e-kube-api-access-q5mc5\") pod \"placement-operator-controller-manager-5b797b8dff-cj546\" (UID: \"4599c525-39b6-412f-b668-79c5e575c42e\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.791498 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qq9l\" (UniqueName: \"kubernetes.io/projected/0fb5a95d-61ef-4850-ba59-0d637233ae88-kube-api-access-9qq9l\") pod \"swift-operator-controller-manager-d656998f4-x8n72\" (UID: \"0fb5a95d-61ef-4850-ba59-0d637233ae88\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.820113 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkfr4\" (UniqueName: \"kubernetes.io/projected/d494c9ab-cbef-4a2a-a865-2921ec2ab9e7-kube-api-access-fkfr4\") pod \"telemetry-operator-controller-manager-7d86657865-d4wl2\" (UID: 
\"d494c9ab-cbef-4a2a-a865-2921ec2ab9e7\") " pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.830741 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxj8\" (UniqueName: \"kubernetes.io/projected/61e95e5c-75b3-4d08-acdd-d28fa075a707-kube-api-access-nwxj8\") pod \"watcher-operator-controller-manager-8c6448b9f-5q2rm\" (UID: \"61e95e5c-75b3-4d08-acdd-d28fa075a707\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.830933 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfgkf\" (UniqueName: \"kubernetes.io/projected/2e9318f0-ff18-4a7b-8a43-2c37c3d0d593-kube-api-access-lfgkf\") pod \"test-operator-controller-manager-b4c496f69-bts74\" (UID: \"2e9318f0-ff18-4a7b-8a43-2c37c3d0d593\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.831167 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.839375 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.847254 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkfr4\" (UniqueName: \"kubernetes.io/projected/d494c9ab-cbef-4a2a-a865-2921ec2ab9e7-kube-api-access-fkfr4\") pod \"telemetry-operator-controller-manager-7d86657865-d4wl2\" (UID: \"d494c9ab-cbef-4a2a-a865-2921ec2ab9e7\") " pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.851022 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfgkf\" (UniqueName: \"kubernetes.io/projected/2e9318f0-ff18-4a7b-8a43-2c37c3d0d593-kube-api-access-lfgkf\") pod \"test-operator-controller-manager-b4c496f69-bts74\" (UID: \"2e9318f0-ff18-4a7b-8a43-2c37c3d0d593\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.884458 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk"] Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.907523 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.918343 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.918789 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2vhw7" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.932824 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxj8\" (UniqueName: \"kubernetes.io/projected/61e95e5c-75b3-4d08-acdd-d28fa075a707-kube-api-access-nwxj8\") pod \"watcher-operator-controller-manager-8c6448b9f-5q2rm\" (UID: \"61e95e5c-75b3-4d08-acdd-d28fa075a707\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" Nov 24 11:34:01 crc kubenswrapper[4678]: I1124 11:34:01.953375 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwxj8\" (UniqueName: \"kubernetes.io/projected/61e95e5c-75b3-4d08-acdd-d28fa075a707-kube-api-access-nwxj8\") pod \"watcher-operator-controller-manager-8c6448b9f-5q2rm\" (UID: \"61e95e5c-75b3-4d08-acdd-d28fa075a707\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.002541 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk"] Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.002911 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55"] Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.004022 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55"] Nov 24 
11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.004122 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.012625 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.015349 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-z42nz" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.036037 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd964\" (UniqueName: \"kubernetes.io/projected/9312f8b9-ab92-4e86-8793-15eb73032357-kube-api-access-kd964\") pod \"openstack-operator-controller-manager-b94c7cdcb-pd6lk\" (UID: \"9312f8b9-ab92-4e86-8793-15eb73032357\") " pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.036419 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9312f8b9-ab92-4e86-8793-15eb73032357-cert\") pod \"openstack-operator-controller-manager-b94c7cdcb-pd6lk\" (UID: \"9312f8b9-ab92-4e86-8793-15eb73032357\") " pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.044608 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.062094 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.097099 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.121845 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.139564 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9312f8b9-ab92-4e86-8793-15eb73032357-cert\") pod \"openstack-operator-controller-manager-b94c7cdcb-pd6lk\" (UID: \"9312f8b9-ab92-4e86-8793-15eb73032357\") " pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.139720 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-rk24x\" (UID: \"38bd8adb-717b-4ad8-af98-afe361890a1d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.139783 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s9v8\" (UniqueName: \"kubernetes.io/projected/eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560-kube-api-access-8s9v8\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-npq55\" (UID: \"eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.139805 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kd964\" (UniqueName: \"kubernetes.io/projected/9312f8b9-ab92-4e86-8793-15eb73032357-kube-api-access-kd964\") pod \"openstack-operator-controller-manager-b94c7cdcb-pd6lk\" (UID: \"9312f8b9-ab92-4e86-8793-15eb73032357\") " pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:02 crc kubenswrapper[4678]: E1124 11:34:02.141239 4678 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:34:02 crc kubenswrapper[4678]: E1124 11:34:02.141327 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert podName:38bd8adb-717b-4ad8-af98-afe361890a1d nodeName:}" failed. No retries permitted until 2025-11-24 11:34:03.141293581 +0000 UTC m=+1054.072353220 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" (UID: "38bd8adb-717b-4ad8-af98-afe361890a1d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.154322 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9312f8b9-ab92-4e86-8793-15eb73032357-cert\") pod \"openstack-operator-controller-manager-b94c7cdcb-pd6lk\" (UID: \"9312f8b9-ab92-4e86-8793-15eb73032357\") " pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.172317 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd964\" (UniqueName: \"kubernetes.io/projected/9312f8b9-ab92-4e86-8793-15eb73032357-kube-api-access-kd964\") pod 
\"openstack-operator-controller-manager-b94c7cdcb-pd6lk\" (UID: \"9312f8b9-ab92-4e86-8793-15eb73032357\") " pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.241899 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9v8\" (UniqueName: \"kubernetes.io/projected/eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560-kube-api-access-8s9v8\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-npq55\" (UID: \"eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.294285 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9v8\" (UniqueName: \"kubernetes.io/projected/eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560-kube-api-access-8s9v8\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-npq55\" (UID: \"eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.438591 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.498876 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8"] Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.523724 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j"] Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.524848 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.572217 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc"] Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.916931 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" event={"ID":"7f7a3294-7af7-44cb-95b7-3214cda4de48","Type":"ContainerStarted","Data":"fd375df0d67915b348cb8dd857ae66c71cf29b543705c0205cf91e91384e33f8"} Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.930620 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" event={"ID":"1d845025-efc3-47c5-b640-59eeafc744a2","Type":"ContainerStarted","Data":"d3b48255a603dad0a61f30c78c17283941074cde3be7586f9a0e2354089f57c7"} Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.932430 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" event={"ID":"e50daf7a-089a-48d0-883f-5db082bb6908","Type":"ContainerStarted","Data":"59d3900f7545c4c174337f7e71caa8d05794473a0e81edf1e5cd3268cbd58011"} Nov 24 11:34:02 crc kubenswrapper[4678]: I1124 11:34:02.976025 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x"] Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.178292 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-rk24x\" (UID: \"38bd8adb-717b-4ad8-af98-afe361890a1d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 
11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.197396 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38bd8adb-717b-4ad8-af98-afe361890a1d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-rk24x\" (UID: \"38bd8adb-717b-4ad8-af98-afe361890a1d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.341986 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.536591 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr"] Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.576803 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx"] Nov 24 11:34:03 crc kubenswrapper[4678]: W1124 11:34:03.598810 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedbe0de9_67d0_49cc_a867_3483035e3c51.slice/crio-09247959a928f7b43bc259bad90221e1dcc90c4cfac047d082ad1523b44d1ba8 WatchSource:0}: Error finding container 09247959a928f7b43bc259bad90221e1dcc90c4cfac047d082ad1523b44d1ba8: Status 404 returned error can't find the container with id 09247959a928f7b43bc259bad90221e1dcc90c4cfac047d082ad1523b44d1ba8 Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.604363 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4"] Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.704011 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l"] Nov 24 
11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.710715 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2"] Nov 24 11:34:03 crc kubenswrapper[4678]: W1124 11:34:03.745895 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf98bea89_6852_42c9_a69b_9867fe021eb8.slice/crio-7e91ec13153dbed7782a769c90f58c30eb383a2f80296cd7a8eb32ed520400eb WatchSource:0}: Error finding container 7e91ec13153dbed7782a769c90f58c30eb383a2f80296cd7a8eb32ed520400eb: Status 404 returned error can't find the container with id 7e91ec13153dbed7782a769c90f58c30eb383a2f80296cd7a8eb32ed520400eb Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.954403 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" event={"ID":"d2fab4cb-dff4-439e-a97b-b35b8a2203c6","Type":"ContainerStarted","Data":"9fece19c4b163672bc863c6dd8bd18629c5a2378da1369f88db9bdd35b8ccd87"} Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.957324 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" event={"ID":"edbe0de9-67d0-49cc-a867-3483035e3c51","Type":"ContainerStarted","Data":"09247959a928f7b43bc259bad90221e1dcc90c4cfac047d082ad1523b44d1ba8"} Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.960015 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" event={"ID":"6a9d3c2c-4f10-4d08-bade-aa93ac52e7be","Type":"ContainerStarted","Data":"9a8622ea5c35ef3bb1421855b13fec048be4d8e22fd6a60c464a0dd25fbf2ff5"} Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.964539 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" 
event={"ID":"f98bea89-6852-42c9-a69b-9867fe021eb8","Type":"ContainerStarted","Data":"7e91ec13153dbed7782a769c90f58c30eb383a2f80296cd7a8eb32ed520400eb"} Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.974330 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" event={"ID":"276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5","Type":"ContainerStarted","Data":"7f37a72decb821e7bce97461b1070c370cf571dee6bd712205663734ca23be8e"} Nov 24 11:34:03 crc kubenswrapper[4678]: I1124 11:34:03.977548 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" event={"ID":"e9db91a3-68e2-4500-ab6a-d1055c6e6dde","Type":"ContainerStarted","Data":"d42103d98f2fbfeaa1eb4700fcef926fb3e47143ab40fdddd47ffb5b398b123e"} Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.078704 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj"] Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.088012 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bts74"] Nov 24 11:34:04 crc kubenswrapper[4678]: W1124 11:34:04.095620 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e9318f0_ff18_4a7b_8a43_2c37c3d0d593.slice/crio-229a73a0813105eeb538780ae923a7b10ae1ff999cf38a414cd91dc92f943edf WatchSource:0}: Error finding container 229a73a0813105eeb538780ae923a7b10ae1ff999cf38a414cd91dc92f943edf: Status 404 returned error can't find the container with id 229a73a0813105eeb538780ae923a7b10ae1ff999cf38a414cd91dc92f943edf Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.141496 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546"] Nov 24 11:34:04 crc 
kubenswrapper[4678]: I1124 11:34:04.190283 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2"] Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.234996 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq"] Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.252509 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7"] Nov 24 11:34:04 crc kubenswrapper[4678]: W1124 11:34:04.264936 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd494c9ab_cbef_4a2a_a865_2921ec2ab9e7.slice/crio-1e9d693a5f64839bf3d4660af3626f06072b75bd233e01ff538735b8a0282c3e WatchSource:0}: Error finding container 1e9d693a5f64839bf3d4660af3626f06072b75bd233e01ff538735b8a0282c3e: Status 404 returned error can't find the container with id 1e9d693a5f64839bf3d4660af3626f06072b75bd233e01ff538735b8a0282c3e Nov 24 11:34:04 crc kubenswrapper[4678]: W1124 11:34:04.266875 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42e3cbe3_ad98_46e4_9a27_497ad6ca2026.slice/crio-58c4ea673bc0ae72fd19a2a7f43f318d80957011e446d9ba230df230549b804c WatchSource:0}: Error finding container 58c4ea673bc0ae72fd19a2a7f43f318d80957011e446d9ba230df230549b804c: Status 404 returned error can't find the container with id 58c4ea673bc0ae72fd19a2a7f43f318d80957011e446d9ba230df230549b804c Nov 24 11:34:04 crc kubenswrapper[4678]: W1124 11:34:04.286564 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fbf7159_3ac4_4387_a4e5_c9a42cc9e035.slice/crio-bc1e8d279383d2a3ec6c24183de3cea5b530c3b1c4b9c9ec2e0786fddd734b68 WatchSource:0}: Error finding 
container bc1e8d279383d2a3ec6c24183de3cea5b530c3b1c4b9c9ec2e0786fddd734b68: Status 404 returned error can't find the container with id bc1e8d279383d2a3ec6c24183de3cea5b530c3b1c4b9c9ec2e0786fddd734b68 Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.662150 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg"] Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.832741 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p"] Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.852899 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55"] Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.861922 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm"] Nov 24 11:34:04 crc kubenswrapper[4678]: W1124 11:34:04.875546 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff9ae6e_ce8e_4a8c_a862_4cb4e4e75560.slice/crio-56969d2cd5fdac5d8d6f65ba7bf53ecadb663e5a8b7b66d32b556d60c37580c1 WatchSource:0}: Error finding container 56969d2cd5fdac5d8d6f65ba7bf53ecadb663e5a8b7b66d32b556d60c37580c1: Status 404 returned error can't find the container with id 56969d2cd5fdac5d8d6f65ba7bf53ecadb663e5a8b7b66d32b556d60c37580c1 Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.877052 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx"] Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.890962 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk"] Nov 24 11:34:04 crc kubenswrapper[4678]: I1124 11:34:04.893986 4678 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-x8n72"] Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.002344 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x"] Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.062870 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" event={"ID":"9312f8b9-ab92-4e86-8793-15eb73032357","Type":"ContainerStarted","Data":"dae0ac66f27dea636f68a4d8e512b50c82b8c9dd99b3680c19f1497e2ac8a080"} Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.070136 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" event={"ID":"eecabfcc-62de-4512-b5e8-1685d7fd1144","Type":"ContainerStarted","Data":"8a217391cf240005b7d3a3c6f3cb0d8ee5ad7f9331fc32c7ea50f2588ce13bd6"} Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.087936 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" event={"ID":"eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560","Type":"ContainerStarted","Data":"56969d2cd5fdac5d8d6f65ba7bf53ecadb663e5a8b7b66d32b556d60c37580c1"} Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.107907 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" event={"ID":"42e3cbe3-ad98-46e4-9a27-497ad6ca2026","Type":"ContainerStarted","Data":"58c4ea673bc0ae72fd19a2a7f43f318d80957011e446d9ba230df230549b804c"} Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.128980 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" 
event={"ID":"4599c525-39b6-412f-b668-79c5e575c42e","Type":"ContainerStarted","Data":"07faa928fc72e4f2167c6c70ac0ba42f7dadc98dc6ac4e03ae4b7fcf737e408b"} Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.137071 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" event={"ID":"be206532-b60c-4047-8835-1b57d1714883","Type":"ContainerStarted","Data":"3f55ff36c7bf75ee1f5218090bdc80f0b7f3a4ed5cb48d430d591e516a0eb2ab"} Nov 24 11:34:05 crc kubenswrapper[4678]: E1124 11:34:05.137122 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-cento
s9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos
9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_
URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE
_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/
podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,
Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DE
FAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vmch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-8c7444f48-rk24x_openstack-operators(38bd8adb-717b-4ad8-af98-afe361890a1d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.141993 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" event={"ID":"d494c9ab-cbef-4a2a-a865-2921ec2ab9e7","Type":"ContainerStarted","Data":"1e9d693a5f64839bf3d4660af3626f06072b75bd233e01ff538735b8a0282c3e"} Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.161288 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" event={"ID":"cf5a2355-2895-4522-b4dc-cca47eb2d33f","Type":"ContainerStarted","Data":"346d75cf9ca75c7258096c8f9970724cd3e6045cec87e34278e93ccee683245f"} 
Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.162180 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" event={"ID":"61e95e5c-75b3-4d08-acdd-d28fa075a707","Type":"ContainerStarted","Data":"f613690d7e5df572611642ff05058bb3e1a22125f4f9bae5ff3201be3b9a96ff"} Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.164726 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" event={"ID":"2e9318f0-ff18-4a7b-8a43-2c37c3d0d593","Type":"ContainerStarted","Data":"229a73a0813105eeb538780ae923a7b10ae1ff999cf38a414cd91dc92f943edf"} Nov 24 11:34:05 crc kubenswrapper[4678]: I1124 11:34:05.165563 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" event={"ID":"5fbf7159-3ac4-4387-a4e5-c9a42cc9e035","Type":"ContainerStarted","Data":"bc1e8d279383d2a3ec6c24183de3cea5b530c3b1c4b9c9ec2e0786fddd734b68"} Nov 24 11:34:05 crc kubenswrapper[4678]: E1124 11:34:05.400834 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" podUID="38bd8adb-717b-4ad8-af98-afe361890a1d" Nov 24 11:34:06 crc kubenswrapper[4678]: I1124 11:34:06.218652 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" event={"ID":"38bd8adb-717b-4ad8-af98-afe361890a1d","Type":"ContainerStarted","Data":"1d797d5c1b5812e1e03ea5b529d6d78f06e3280247430e7e3ed60cb87f6af7e1"} Nov 24 11:34:06 crc kubenswrapper[4678]: I1124 11:34:06.219163 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" 
event={"ID":"38bd8adb-717b-4ad8-af98-afe361890a1d","Type":"ContainerStarted","Data":"bdac6334ea7dfa109e09191c3d2660084fcf999ad1aef4b9ff549a02dfaeb949"} Nov 24 11:34:06 crc kubenswrapper[4678]: E1124 11:34:06.222603 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" podUID="38bd8adb-717b-4ad8-af98-afe361890a1d" Nov 24 11:34:06 crc kubenswrapper[4678]: I1124 11:34:06.234917 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" event={"ID":"32d872bd-6c15-4efa-9c97-9feeebf99191","Type":"ContainerStarted","Data":"5012ebb9c938286b18e611931fec60e8487c8b542d9279070dc6f2014dee0ec5"} Nov 24 11:34:06 crc kubenswrapper[4678]: I1124 11:34:06.242254 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" event={"ID":"9312f8b9-ab92-4e86-8793-15eb73032357","Type":"ContainerStarted","Data":"c49e506746084d0a44ee86b7ed67a2070e26532901ef1ab5d291332d1e9e3a02"} Nov 24 11:34:06 crc kubenswrapper[4678]: I1124 11:34:06.242314 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" event={"ID":"9312f8b9-ab92-4e86-8793-15eb73032357","Type":"ContainerStarted","Data":"6917fe1bfccd2b42d9e2a5004af532ca9bdf6eb088a0d8ca300c11770ad16c9e"} Nov 24 11:34:06 crc kubenswrapper[4678]: I1124 11:34:06.243392 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:06 crc kubenswrapper[4678]: I1124 11:34:06.256887 4678 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" event={"ID":"0fb5a95d-61ef-4850-ba59-0d637233ae88","Type":"ContainerStarted","Data":"7699e80ddbb1b1c6b2fb571aebb6038ac22cc6db758808781c0f421f223e7962"} Nov 24 11:34:06 crc kubenswrapper[4678]: I1124 11:34:06.335427 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" podStartSLOduration=5.335398735 podStartE2EDuration="5.335398735s" podCreationTimestamp="2025-11-24 11:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:34:06.310289732 +0000 UTC m=+1057.241349371" watchObservedRunningTime="2025-11-24 11:34:06.335398735 +0000 UTC m=+1057.266458374" Nov 24 11:34:07 crc kubenswrapper[4678]: E1124 11:34:07.294400 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" podUID="38bd8adb-717b-4ad8-af98-afe361890a1d" Nov 24 11:34:12 crc kubenswrapper[4678]: I1124 11:34:12.451574 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-b94c7cdcb-pd6lk" Nov 24 11:34:18 crc kubenswrapper[4678]: E1124 11:34:18.844972 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 24 11:34:18 crc kubenswrapper[4678]: E1124 11:34:18.845859 4678 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdz9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-598f69df5d-jk9k4_openstack-operators(e9db91a3-68e2-4500-ab6a-d1055c6e6dde): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:20 crc kubenswrapper[4678]: E1124 11:34:20.316737 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 24 11:34:20 crc kubenswrapper[4678]: E1124 11:34:20.317055 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxcgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-7454b96578-2h8fr_openstack-operators(edbe0de9-67d0-49cc-a867-3483035e3c51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:20 crc kubenswrapper[4678]: E1124 11:34:20.843791 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 24 11:34:20 crc kubenswrapper[4678]: E1124 11:34:20.844482 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkjds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58f887965d-9zvz7_openstack-operators(5fbf7159-3ac4-4387-a4e5-c9a42cc9e035): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:21 crc kubenswrapper[4678]: E1124 11:34:21.338705 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377" Nov 24 11:34:21 crc kubenswrapper[4678]: E1124 11:34:21.338872 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rggg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ironic-operator-controller-manager-99b499f4-q77cx_openstack-operators(d2fab4cb-dff4-439e-a97b-b35b8a2203c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:23 crc kubenswrapper[4678]: E1124 11:34:23.478963 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894" Nov 24 11:34:23 crc kubenswrapper[4678]: E1124 11:34:23.479544 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f957q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-6dd8864d7c-r4sjj_openstack-operators(eecabfcc-62de-4512-b5e8-1685d7fd1144): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:23 crc kubenswrapper[4678]: E1124 11:34:23.934156 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 24 11:34:23 crc kubenswrapper[4678]: E1124 11:34:23.934846 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q5mc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-operator-controller-manager-5b797b8dff-cj546_openstack-operators(4599c525-39b6-412f-b668-79c5e575c42e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:24 crc kubenswrapper[4678]: E1124 11:34:24.914298 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 24 11:34:24 crc kubenswrapper[4678]: E1124 11:34:24.914628 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vvbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78bd47f458-7kbkq_openstack-operators(42e3cbe3-ad98-46e4-9a27-497ad6ca2026): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:25 crc kubenswrapper[4678]: E1124 11:34:25.405706 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 24 11:34:25 crc kubenswrapper[4678]: E1124 11:34:25.406116 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjwmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
mariadb-operator-controller-manager-54b5986bb8-vgv2l_openstack-operators(6a9d3c2c-4f10-4d08-bade-aa93ac52e7be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:25 crc kubenswrapper[4678]: E1124 11:34:25.816542 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 24 11:34:25 crc kubenswrapper[4678]: E1124 11:34:25.817186 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f65kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-56f54d6746-jjbs2_openstack-operators(f98bea89-6852-42c9-a69b-9867fe021eb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:27 crc kubenswrapper[4678]: E1124 11:34:27.428638 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 24 11:34:27 crc kubenswrapper[4678]: E1124 11:34:27.428884 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ms7wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-operator-controller-manager-54fc5f65b7-q6dxg_openstack-operators(cf5a2355-2895-4522-b4dc-cca47eb2d33f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:27 crc kubenswrapper[4678]: E1124 11:34:27.914958 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7" Nov 24 11:34:27 crc kubenswrapper[4678]: E1124 11:34:27.915202 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bdsdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-75fb479bcc-xlx8j_openstack-operators(1d845025-efc3-47c5-b640-59eeafc744a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:28 crc kubenswrapper[4678]: E1124 11:34:28.400647 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0" Nov 24 11:34:28 crc kubenswrapper[4678]: E1124 11:34:28.404887 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qq9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-operator-controller-manager-d656998f4-x8n72_openstack-operators(0fb5a95d-61ef-4850-ba59-0d637233ae88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:29 crc kubenswrapper[4678]: E1124 11:34:29.851994 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7" Nov 24 11:34:29 crc kubenswrapper[4678]: E1124 11:34:29.854608 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbszz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-cfbb9c588-wvz4p_openstack-operators(be206532-b60c-4047-8835-1b57d1714883): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:30 crc kubenswrapper[4678]: I1124 11:34:30.297355 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:34:30 crc kubenswrapper[4678]: I1124 11:34:30.297458 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:34:30 crc kubenswrapper[4678]: E1124 11:34:30.365122 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 24 11:34:30 crc kubenswrapper[4678]: E1124 11:34:30.365463 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hthxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-54cfbf4c7d-q6kcx_openstack-operators(32d872bd-6c15-4efa-9c97-9feeebf99191): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:32 crc kubenswrapper[4678]: E1124 11:34:32.357706 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 24 11:34:32 crc kubenswrapper[4678]: E1124 11:34:32.358404 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8s9v8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-npq55_openstack-operators(eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Nov 24 11:34:32 crc kubenswrapper[4678]: E1124 11:34:32.359704 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" podUID="eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560" Nov 24 11:34:32 crc kubenswrapper[4678]: E1124 11:34:32.430038 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:6280f54c4d86e239852669b9aa334e584f1fe080" Nov 24 11:34:32 crc kubenswrapper[4678]: E1124 11:34:32.430145 4678 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:6280f54c4d86e239852669b9aa334e584f1fe080" Nov 24 11:34:32 crc kubenswrapper[4678]: E1124 11:34:32.430312 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:6280f54c4d86e239852669b9aa334e584f1fe080,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkfr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7d86657865-d4wl2_openstack-operators(d494c9ab-cbef-4a2a-a865-2921ec2ab9e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:32 crc kubenswrapper[4678]: E1124 11:34:32.551115 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" podUID="eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560" Nov 24 11:34:33 crc kubenswrapper[4678]: E1124 11:34:33.404554 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 24 11:34:33 crc kubenswrapper[4678]: E1124 11:34:33.405608 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwxj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-8c6448b9f-5q2rm_openstack-operators(61e95e5c-75b3-4d08-acdd-d28fa075a707): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.051716 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" podUID="d494c9ab-cbef-4a2a-a865-2921ec2ab9e7" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.070311 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" podUID="42e3cbe3-ad98-46e4-9a27-497ad6ca2026" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.077074 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" podUID="d2fab4cb-dff4-439e-a97b-b35b8a2203c6" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.079645 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" podUID="e9db91a3-68e2-4500-ab6a-d1055c6e6dde" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.089035 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" podUID="5fbf7159-3ac4-4387-a4e5-c9a42cc9e035" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.112338 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" podUID="edbe0de9-67d0-49cc-a867-3483035e3c51" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.114998 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" 
podUID="f98bea89-6852-42c9-a69b-9867fe021eb8" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.116932 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" podUID="1d845025-efc3-47c5-b640-59eeafc744a2" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.140548 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" podUID="32d872bd-6c15-4efa-9c97-9feeebf99191" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.197054 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" podUID="4599c525-39b6-412f-b668-79c5e575c42e" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.206388 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" podUID="be206532-b60c-4047-8835-1b57d1714883" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.250224 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" podUID="cf5a2355-2895-4522-b4dc-cca47eb2d33f" Nov 24 11:34:34 crc 
kubenswrapper[4678]: E1124 11:34:34.252599 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" podUID="eecabfcc-62de-4512-b5e8-1685d7fd1144" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.305387 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" podUID="0fb5a95d-61ef-4850-ba59-0d637233ae88" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.335376 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" podUID="61e95e5c-75b3-4d08-acdd-d28fa075a707" Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.358231 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" podUID="6a9d3c2c-4f10-4d08-bade-aa93ac52e7be" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.571881 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" event={"ID":"e50daf7a-089a-48d0-883f-5db082bb6908","Type":"ContainerStarted","Data":"e7bedf87f4455e63741e752fcd69bbaf6a04cadc6a75c33bb99bb445051328a3"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.580151 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" event={"ID":"f98bea89-6852-42c9-a69b-9867fe021eb8","Type":"ContainerStarted","Data":"419d06d3f64a3493013e6e78a7eb05093c5eabdbd872711cbc424153f587206d"} Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.582604 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96\\\"\"" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" podUID="f98bea89-6852-42c9-a69b-9867fe021eb8" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.600025 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" event={"ID":"38bd8adb-717b-4ad8-af98-afe361890a1d","Type":"ContainerStarted","Data":"ed1ba785b5aad502cda70fe46cb220be9d13bc995bfcee2803e75d54a538de84"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.601185 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.613290 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" event={"ID":"e9db91a3-68e2-4500-ab6a-d1055c6e6dde","Type":"ContainerStarted","Data":"cdaa502cc1087c85742c2bf11977f78550d75fb2429293dc73063373f246344c"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.630463 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" event={"ID":"eecabfcc-62de-4512-b5e8-1685d7fd1144","Type":"ContainerStarted","Data":"4697be3a1f6ad71dfd627ff9157bf051c020f33d6283159686f939cdc4575024"} Nov 24 11:34:34 crc 
kubenswrapper[4678]: I1124 11:34:34.637680 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" event={"ID":"edbe0de9-67d0-49cc-a867-3483035e3c51","Type":"ContainerStarted","Data":"eb63e8e9f6fcc17e0adfe1cc608486f9bd0e9483bca82c5fd38af2644c27c6f5"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.669040 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" event={"ID":"cf5a2355-2895-4522-b4dc-cca47eb2d33f","Type":"ContainerStarted","Data":"6233259eb985b59fe5dc779f751100ff66eafb441b625f34eddd3986b339e464"} Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.674257 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" podUID="cf5a2355-2895-4522-b4dc-cca47eb2d33f" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.680891 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" event={"ID":"276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5","Type":"ContainerStarted","Data":"c592bf67f339f726a87ee0ee41e804e8981f4e87d02bd652c1f4f0e6382096cf"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.714338 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" event={"ID":"2e9318f0-ff18-4a7b-8a43-2c37c3d0d593","Type":"ContainerStarted","Data":"b832f407f35149ee4bc0334f7fecc085c3cd50f474d268af5c2cad32bc55ccf7"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.714423 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" event={"ID":"2e9318f0-ff18-4a7b-8a43-2c37c3d0d593","Type":"ContainerStarted","Data":"6640f9d2ebcb2ff5c3b19fe0efb3022a99188af63b60bac2d9d8117370c315a2"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.714870 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.729701 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" event={"ID":"42e3cbe3-ad98-46e4-9a27-497ad6ca2026","Type":"ContainerStarted","Data":"a68721467c274a680ef9980e5230fc9dd2dd2ebd6db0c28712ed5cd9c8bd09e1"} Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.731013 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" podUID="42e3cbe3-ad98-46e4-9a27-497ad6ca2026" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.735536 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" event={"ID":"7f7a3294-7af7-44cb-95b7-3214cda4de48","Type":"ContainerStarted","Data":"566a69791951703c739018ce6c96fee9202009fa6c1aec9e2b1a6166ad6f755f"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.747810 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" event={"ID":"d2fab4cb-dff4-439e-a97b-b35b8a2203c6","Type":"ContainerStarted","Data":"12ff9ae2b727d77fd21badc83208940df92da9c5832af654b4b6ecf685d0a9e6"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 
11:34:34.754126 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" event={"ID":"d494c9ab-cbef-4a2a-a865-2921ec2ab9e7","Type":"ContainerStarted","Data":"045942926b89b530784aaa739baaa55733567245805ce9c7055ff6d3aa6d71c5"} Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.761044 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:6280f54c4d86e239852669b9aa334e584f1fe080\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" podUID="d494c9ab-cbef-4a2a-a865-2921ec2ab9e7" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.766681 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" podStartSLOduration=6.228178312 podStartE2EDuration="34.76665236s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:05.136278381 +0000 UTC m=+1056.067338020" lastFinishedPulling="2025-11-24 11:34:33.674752429 +0000 UTC m=+1084.605812068" observedRunningTime="2025-11-24 11:34:34.761307398 +0000 UTC m=+1085.692367037" watchObservedRunningTime="2025-11-24 11:34:34.76665236 +0000 UTC m=+1085.697711999" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.773586 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" event={"ID":"61e95e5c-75b3-4d08-acdd-d28fa075a707","Type":"ContainerStarted","Data":"492175cff0c1c521c919389a03437b79824fb7c81f1efa973dc61f11fda82785"} Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.778699 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" podUID="61e95e5c-75b3-4d08-acdd-d28fa075a707" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.794717 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" event={"ID":"5fbf7159-3ac4-4387-a4e5-c9a42cc9e035","Type":"ContainerStarted","Data":"05e29ca8e0c4f0dceaf2560dc31475216879b5fb1f9264a4536110949accf4db"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.832862 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" event={"ID":"4599c525-39b6-412f-b668-79c5e575c42e","Type":"ContainerStarted","Data":"b69550c1b4d848905da8347a98fd5c7908b38f253b1f69771e2001f199b7cd98"} Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.841546 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" event={"ID":"be206532-b60c-4047-8835-1b57d1714883","Type":"ContainerStarted","Data":"8c2fbb6fd53154f3f001483bca93d5094e712404cd3d7601547a26370ed29e84"} Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.844257 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\"" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" podUID="be206532-b60c-4047-8835-1b57d1714883" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.859229 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" 
event={"ID":"0fb5a95d-61ef-4850-ba59-0d637233ae88","Type":"ContainerStarted","Data":"66f9f66e532975d8e1a3574d711010e26edb11b744f1aa1fbed04a6aa33dfebe"} Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.862900 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" podUID="0fb5a95d-61ef-4850-ba59-0d637233ae88" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.893169 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" event={"ID":"6a9d3c2c-4f10-4d08-bade-aa93ac52e7be","Type":"ContainerStarted","Data":"804d4851274454889b2733a30e24d2f20424af888a3494133ed2098e76084bc6"} Nov 24 11:34:34 crc kubenswrapper[4678]: E1124 11:34:34.901213 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" podUID="6a9d3c2c-4f10-4d08-bade-aa93ac52e7be" Nov 24 11:34:34 crc kubenswrapper[4678]: I1124 11:34:34.923377 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" event={"ID":"1d845025-efc3-47c5-b640-59eeafc744a2","Type":"ContainerStarted","Data":"6ab7e00d385ce705d3b276464efbf81d833f239089fc256a8037efd6c65cc0f7"} Nov 24 11:34:35 crc kubenswrapper[4678]: I1124 11:34:35.000896 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" 
event={"ID":"32d872bd-6c15-4efa-9c97-9feeebf99191","Type":"ContainerStarted","Data":"717efc6bdec01c927e05eafd9daf7e87506459aa947b99fc471d4fde78230f28"} Nov 24 11:34:35 crc kubenswrapper[4678]: E1124 11:34:35.004382 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" podUID="32d872bd-6c15-4efa-9c97-9feeebf99191" Nov 24 11:34:35 crc kubenswrapper[4678]: E1124 11:34:35.015302 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" podUID="1d845025-efc3-47c5-b640-59eeafc744a2" Nov 24 11:34:35 crc kubenswrapper[4678]: I1124 11:34:35.235635 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" podStartSLOduration=5.933664988 podStartE2EDuration="34.235613977s" podCreationTimestamp="2025-11-24 11:34:01 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.112016403 +0000 UTC m=+1055.043076042" lastFinishedPulling="2025-11-24 11:34:32.413965392 +0000 UTC m=+1083.345025031" observedRunningTime="2025-11-24 11:34:35.18904383 +0000 UTC m=+1086.120103459" watchObservedRunningTime="2025-11-24 11:34:35.235613977 +0000 UTC m=+1086.166673616" Nov 24 11:34:36 crc kubenswrapper[4678]: I1124 11:34:36.013103 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" 
event={"ID":"276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5","Type":"ContainerStarted","Data":"dbb510320ce055eef9e1631862d469eff185a8eb89f7195978de2711d9bfb43e"} Nov 24 11:34:36 crc kubenswrapper[4678]: I1124 11:34:36.013569 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" Nov 24 11:34:36 crc kubenswrapper[4678]: I1124 11:34:36.028597 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" event={"ID":"e50daf7a-089a-48d0-883f-5db082bb6908","Type":"ContainerStarted","Data":"345119f7cb52db3a09feba505dc7188fb7f56f6d6378b23517a53680618f6383"} Nov 24 11:34:36 crc kubenswrapper[4678]: I1124 11:34:36.029664 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" Nov 24 11:34:36 crc kubenswrapper[4678]: I1124 11:34:36.034652 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" event={"ID":"7f7a3294-7af7-44cb-95b7-3214cda4de48","Type":"ContainerStarted","Data":"4b0cf8d91a302fa614d6b38aa8e92b48cd233e245593217a6959610ddc4ea180"} Nov 24 11:34:36 crc kubenswrapper[4678]: E1124 11:34:36.045443 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.106:5001/openstack-k8s-operators/telemetry-operator:6280f54c4d86e239852669b9aa334e584f1fe080\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" podUID="d494c9ab-cbef-4a2a-a865-2921ec2ab9e7" Nov 24 11:34:36 crc kubenswrapper[4678]: I1124 11:34:36.045512 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" podStartSLOduration=5.700495942 
podStartE2EDuration="36.045485761s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:03.042335977 +0000 UTC m=+1053.973395616" lastFinishedPulling="2025-11-24 11:34:33.387325796 +0000 UTC m=+1084.318385435" observedRunningTime="2025-11-24 11:34:36.044294689 +0000 UTC m=+1086.975354378" watchObservedRunningTime="2025-11-24 11:34:36.045485761 +0000 UTC m=+1086.976545400" Nov 24 11:34:36 crc kubenswrapper[4678]: E1124 11:34:36.045570 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\"" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" podUID="be206532-b60c-4047-8835-1b57d1714883" Nov 24 11:34:36 crc kubenswrapper[4678]: E1124 11:34:36.046379 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" podUID="32d872bd-6c15-4efa-9c97-9feeebf99191" Nov 24 11:34:36 crc kubenswrapper[4678]: E1124 11:34:36.046531 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" podUID="cf5a2355-2895-4522-b4dc-cca47eb2d33f" Nov 24 11:34:36 crc kubenswrapper[4678]: E1124 11:34:36.046657 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" podUID="0fb5a95d-61ef-4850-ba59-0d637233ae88" Nov 24 11:34:36 crc kubenswrapper[4678]: E1124 11:34:36.047147 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" podUID="1d845025-efc3-47c5-b640-59eeafc744a2" Nov 24 11:34:36 crc kubenswrapper[4678]: E1124 11:34:36.049035 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" podUID="61e95e5c-75b3-4d08-acdd-d28fa075a707" Nov 24 11:34:36 crc kubenswrapper[4678]: I1124 11:34:36.193746 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" podStartSLOduration=6.413195042 podStartE2EDuration="36.193726274s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:02.634015495 +0000 UTC m=+1053.565075134" lastFinishedPulling="2025-11-24 11:34:32.414546687 +0000 UTC m=+1083.345606366" observedRunningTime="2025-11-24 11:34:36.168758915 +0000 UTC m=+1087.099818554" watchObservedRunningTime="2025-11-24 11:34:36.193726274 +0000 UTC m=+1087.124785913" Nov 24 11:34:36 crc kubenswrapper[4678]: I1124 11:34:36.273239 4678 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" podStartSLOduration=5.463913472 podStartE2EDuration="36.273221524s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:02.578133307 +0000 UTC m=+1053.509192946" lastFinishedPulling="2025-11-24 11:34:33.387441359 +0000 UTC m=+1084.318500998" observedRunningTime="2025-11-24 11:34:36.269874514 +0000 UTC m=+1087.200934153" watchObservedRunningTime="2025-11-24 11:34:36.273221524 +0000 UTC m=+1087.204281163" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.046947 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" event={"ID":"4599c525-39b6-412f-b668-79c5e575c42e","Type":"ContainerStarted","Data":"e6fcab842928264c57c3af692cb907840d1b926b4a6759c90361f55694bd2994"} Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.047727 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.050506 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" event={"ID":"e9db91a3-68e2-4500-ab6a-d1055c6e6dde","Type":"ContainerStarted","Data":"9dd0618eb98b4592f925eca5d03c5c4ae66bf6a8b894c20c49ea6cf255ce5eb9"} Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.050850 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.055422 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" 
event={"ID":"eecabfcc-62de-4512-b5e8-1685d7fd1144","Type":"ContainerStarted","Data":"6a5cb78ff204fac36e146e3d1e42aa5b604b5a5f444f6a9185e4d31630dac34c"} Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.055576 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.058705 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" event={"ID":"d2fab4cb-dff4-439e-a97b-b35b8a2203c6","Type":"ContainerStarted","Data":"5cc82958c23fbf86cb3330efa2d2e0c84ed08ae4b48bd536b49c19d8030c533c"} Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.058846 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.074362 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" event={"ID":"edbe0de9-67d0-49cc-a867-3483035e3c51","Type":"ContainerStarted","Data":"91fd1edc0c92c9f0de4b35e2721fb09504b17b99a46c30ae8d3e5860b1fb4521"} Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.074915 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.093914 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" event={"ID":"5fbf7159-3ac4-4387-a4e5-c9a42cc9e035","Type":"ContainerStarted","Data":"f11dc1e358673cdd21f1b2a7bcf3392fa20e311532c490303316972b3204edad"} Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.095379 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.096881 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.123444 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" podStartSLOduration=4.112563237 podStartE2EDuration="36.123402618s" podCreationTimestamp="2025-11-24 11:34:01 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.234590358 +0000 UTC m=+1055.165649997" lastFinishedPulling="2025-11-24 11:34:36.245429729 +0000 UTC m=+1087.176489378" observedRunningTime="2025-11-24 11:34:37.078319739 +0000 UTC m=+1088.009379388" watchObservedRunningTime="2025-11-24 11:34:37.123402618 +0000 UTC m=+1088.054462267" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.135999 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" podStartSLOduration=5.40261572 podStartE2EDuration="37.135971635s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.114239413 +0000 UTC m=+1055.045299052" lastFinishedPulling="2025-11-24 11:34:35.847595328 +0000 UTC m=+1086.778654967" observedRunningTime="2025-11-24 11:34:37.112258529 +0000 UTC m=+1088.043318158" watchObservedRunningTime="2025-11-24 11:34:37.135971635 +0000 UTC m=+1088.067031264" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.152385 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" podStartSLOduration=4.941027818 podStartE2EDuration="37.152362653s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:03.635201025 
+0000 UTC m=+1054.566260664" lastFinishedPulling="2025-11-24 11:34:35.84653586 +0000 UTC m=+1086.777595499" observedRunningTime="2025-11-24 11:34:37.129293756 +0000 UTC m=+1088.060353395" watchObservedRunningTime="2025-11-24 11:34:37.152362653 +0000 UTC m=+1088.083422292" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.155909 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" podStartSLOduration=4.485209824 podStartE2EDuration="37.155899379s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:03.580483329 +0000 UTC m=+1054.511542968" lastFinishedPulling="2025-11-24 11:34:36.251172884 +0000 UTC m=+1087.182232523" observedRunningTime="2025-11-24 11:34:37.148307865 +0000 UTC m=+1088.079367504" watchObservedRunningTime="2025-11-24 11:34:37.155899379 +0000 UTC m=+1088.086959018" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.171421 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" podStartSLOduration=4.831140944 podStartE2EDuration="37.171401694s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:03.587058284 +0000 UTC m=+1054.518117923" lastFinishedPulling="2025-11-24 11:34:35.927319034 +0000 UTC m=+1086.858378673" observedRunningTime="2025-11-24 11:34:37.165194867 +0000 UTC m=+1088.096254506" watchObservedRunningTime="2025-11-24 11:34:37.171401694 +0000 UTC m=+1088.102461333" Nov 24 11:34:37 crc kubenswrapper[4678]: I1124 11:34:37.188392 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" podStartSLOduration=5.449306511 podStartE2EDuration="37.188372189s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.294037241 +0000 UTC 
m=+1055.225096880" lastFinishedPulling="2025-11-24 11:34:36.033102919 +0000 UTC m=+1086.964162558" observedRunningTime="2025-11-24 11:34:37.187650609 +0000 UTC m=+1088.118710258" watchObservedRunningTime="2025-11-24 11:34:37.188372189 +0000 UTC m=+1088.119431828" Nov 24 11:34:38 crc kubenswrapper[4678]: I1124 11:34:38.104308 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" event={"ID":"42e3cbe3-ad98-46e4-9a27-497ad6ca2026","Type":"ContainerStarted","Data":"33e152bc18baebb78d58b398e1399c3956737cea38f3e0e7afe85deff0c03d0b"} Nov 24 11:34:38 crc kubenswrapper[4678]: I1124 11:34:38.105187 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" Nov 24 11:34:38 crc kubenswrapper[4678]: I1124 11:34:38.106199 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" event={"ID":"f98bea89-6852-42c9-a69b-9867fe021eb8","Type":"ContainerStarted","Data":"dfc1ea4a0ae8d37334ffe1502a398ccdb5c2f0b22352b2d912684d21459699ec"} Nov 24 11:34:38 crc kubenswrapper[4678]: I1124 11:34:38.154337 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" podStartSLOduration=4.327282462 podStartE2EDuration="38.154310515s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:03.754311837 +0000 UTC m=+1054.685371476" lastFinishedPulling="2025-11-24 11:34:37.58133988 +0000 UTC m=+1088.512399529" observedRunningTime="2025-11-24 11:34:38.146558066 +0000 UTC m=+1089.077617705" watchObservedRunningTime="2025-11-24 11:34:38.154310515 +0000 UTC m=+1089.085370154" Nov 24 11:34:38 crc kubenswrapper[4678]: I1124 11:34:38.155861 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" podStartSLOduration=4.856328049 podStartE2EDuration="38.155855596s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.27945294 +0000 UTC m=+1055.210512579" lastFinishedPulling="2025-11-24 11:34:37.578980477 +0000 UTC m=+1088.510040126" observedRunningTime="2025-11-24 11:34:38.127425384 +0000 UTC m=+1089.058485023" watchObservedRunningTime="2025-11-24 11:34:38.155855596 +0000 UTC m=+1089.086915235" Nov 24 11:34:39 crc kubenswrapper[4678]: I1124 11:34:39.118388 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" event={"ID":"6a9d3c2c-4f10-4d08-bade-aa93ac52e7be","Type":"ContainerStarted","Data":"940e69fc6e5ed7fd430f440b601a397fce964354cfcd88d993db65df2ddcacf8"} Nov 24 11:34:39 crc kubenswrapper[4678]: I1124 11:34:39.136200 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" podStartSLOduration=4.693514157 podStartE2EDuration="39.136180017s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:03.727379156 +0000 UTC m=+1054.658438795" lastFinishedPulling="2025-11-24 11:34:38.170045016 +0000 UTC m=+1089.101104655" observedRunningTime="2025-11-24 11:34:39.13330552 +0000 UTC m=+1090.064365159" watchObservedRunningTime="2025-11-24 11:34:39.136180017 +0000 UTC m=+1090.067239656" Nov 24 11:34:40 crc kubenswrapper[4678]: I1124 11:34:40.990704 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-nxdjc" Nov 24 11:34:41 crc kubenswrapper[4678]: I1124 11:34:41.043566 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gmrd8" Nov 24 11:34:41 crc 
kubenswrapper[4678]: I1124 11:34:41.134927 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7969689c84-cxm7x" Nov 24 11:34:41 crc kubenswrapper[4678]: I1124 11:34:41.152496 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" Nov 24 11:34:41 crc kubenswrapper[4678]: I1124 11:34:41.192603 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-jk9k4" Nov 24 11:34:41 crc kubenswrapper[4678]: I1124 11:34:41.502602 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-2h8fr" Nov 24 11:34:41 crc kubenswrapper[4678]: I1124 11:34:41.539033 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-q77cx" Nov 24 11:34:41 crc kubenswrapper[4678]: I1124 11:34:41.611299 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58f887965d-9zvz7" Nov 24 11:34:41 crc kubenswrapper[4678]: I1124 11:34:41.634332 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" Nov 24 11:34:41 crc kubenswrapper[4678]: I1124 11:34:41.846926 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-r4sjj" Nov 24 11:34:42 crc kubenswrapper[4678]: I1124 11:34:42.016894 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-cj546" Nov 24 11:34:42 crc kubenswrapper[4678]: I1124 11:34:42.100861 4678 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bts74" Nov 24 11:34:43 crc kubenswrapper[4678]: I1124 11:34:43.350127 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-rk24x" Nov 24 11:34:47 crc kubenswrapper[4678]: I1124 11:34:47.201970 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" event={"ID":"d494c9ab-cbef-4a2a-a865-2921ec2ab9e7","Type":"ContainerStarted","Data":"a3a4cbcda0c3f69d2e43af35b33c2d05dd288e27a6e114d99eac187c390a1442"} Nov 24 11:34:47 crc kubenswrapper[4678]: I1124 11:34:47.202569 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" Nov 24 11:34:47 crc kubenswrapper[4678]: I1124 11:34:47.227365 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" podStartSLOduration=3.538674807 podStartE2EDuration="46.227340948s" podCreationTimestamp="2025-11-24 11:34:01 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.267463068 +0000 UTC m=+1055.198522707" lastFinishedPulling="2025-11-24 11:34:46.956129209 +0000 UTC m=+1097.887188848" observedRunningTime="2025-11-24 11:34:47.219041365 +0000 UTC m=+1098.150101014" watchObservedRunningTime="2025-11-24 11:34:47.227340948 +0000 UTC m=+1098.158400597" Nov 24 11:34:48 crc kubenswrapper[4678]: I1124 11:34:48.212373 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" event={"ID":"61e95e5c-75b3-4d08-acdd-d28fa075a707","Type":"ContainerStarted","Data":"bafcf87009ecdc111f4e2c595838e085c861cbcf12b8b8a661e8dfb9520cef74"} Nov 24 11:34:48 crc kubenswrapper[4678]: I1124 11:34:48.212948 4678 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" Nov 24 11:34:48 crc kubenswrapper[4678]: I1124 11:34:48.229163 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" podStartSLOduration=4.589927679 podStartE2EDuration="47.229149654s" podCreationTimestamp="2025-11-24 11:34:01 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.874173658 +0000 UTC m=+1055.805233287" lastFinishedPulling="2025-11-24 11:34:47.513395623 +0000 UTC m=+1098.444455262" observedRunningTime="2025-11-24 11:34:48.228080906 +0000 UTC m=+1099.159140555" watchObservedRunningTime="2025-11-24 11:34:48.229149654 +0000 UTC m=+1099.160209293" Nov 24 11:34:49 crc kubenswrapper[4678]: I1124 11:34:49.223375 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" event={"ID":"1d845025-efc3-47c5-b640-59eeafc744a2","Type":"ContainerStarted","Data":"e4abae9f4a738d68b72fb33f6229bcc62a3e8c99f11b55d0159e9f836aea8ca2"} Nov 24 11:34:49 crc kubenswrapper[4678]: I1124 11:34:49.224087 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" Nov 24 11:34:49 crc kubenswrapper[4678]: I1124 11:34:49.224876 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" event={"ID":"eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560","Type":"ContainerStarted","Data":"e66cb65a304dfd9070f3e84c50d4662e734e9a5f2d62437836dc97edfd0d7241"} Nov 24 11:34:49 crc kubenswrapper[4678]: I1124 11:34:49.239832 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" podStartSLOduration=3.322713611 podStartE2EDuration="49.239813079s" 
podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:02.636306556 +0000 UTC m=+1053.567366195" lastFinishedPulling="2025-11-24 11:34:48.553406024 +0000 UTC m=+1099.484465663" observedRunningTime="2025-11-24 11:34:49.238543685 +0000 UTC m=+1100.169603334" watchObservedRunningTime="2025-11-24 11:34:49.239813079 +0000 UTC m=+1100.170872718" Nov 24 11:34:49 crc kubenswrapper[4678]: I1124 11:34:49.255319 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-npq55" podStartSLOduration=4.778865463 podStartE2EDuration="48.255301614s" podCreationTimestamp="2025-11-24 11:34:01 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.927078536 +0000 UTC m=+1055.858138175" lastFinishedPulling="2025-11-24 11:34:48.403514677 +0000 UTC m=+1099.334574326" observedRunningTime="2025-11-24 11:34:49.252995092 +0000 UTC m=+1100.184054741" watchObservedRunningTime="2025-11-24 11:34:49.255301614 +0000 UTC m=+1100.186361253" Nov 24 11:34:50 crc kubenswrapper[4678]: I1124 11:34:50.235236 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" event={"ID":"32d872bd-6c15-4efa-9c97-9feeebf99191","Type":"ContainerStarted","Data":"6f894f1bec7a4b3e76018a8b6baae36d5efa1fe9404bf3e791d067a42660bfa0"} Nov 24 11:34:50 crc kubenswrapper[4678]: I1124 11:34:50.235917 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" Nov 24 11:34:50 crc kubenswrapper[4678]: I1124 11:34:50.264009 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" podStartSLOduration=5.876403267 podStartE2EDuration="50.263987626s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:05.008860747 +0000 UTC 
m=+1055.939920396" lastFinishedPulling="2025-11-24 11:34:49.396445116 +0000 UTC m=+1100.327504755" observedRunningTime="2025-11-24 11:34:50.261361325 +0000 UTC m=+1101.192420964" watchObservedRunningTime="2025-11-24 11:34:50.263987626 +0000 UTC m=+1101.195047265" Nov 24 11:34:51 crc kubenswrapper[4678]: I1124 11:34:51.155129 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-jjbs2" Nov 24 11:34:51 crc kubenswrapper[4678]: I1124 11:34:51.270144 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" event={"ID":"cf5a2355-2895-4522-b4dc-cca47eb2d33f","Type":"ContainerStarted","Data":"c87abac5581532ece2c2b78460a365acaf5011e11dd3a129640e9bbaea57281c"} Nov 24 11:34:51 crc kubenswrapper[4678]: I1124 11:34:51.271415 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" Nov 24 11:34:51 crc kubenswrapper[4678]: I1124 11:34:51.295090 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" podStartSLOduration=5.690927976 podStartE2EDuration="51.295065477s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.751306025 +0000 UTC m=+1055.682365664" lastFinishedPulling="2025-11-24 11:34:50.355443526 +0000 UTC m=+1101.286503165" observedRunningTime="2025-11-24 11:34:51.288709517 +0000 UTC m=+1102.219769176" watchObservedRunningTime="2025-11-24 11:34:51.295065477 +0000 UTC m=+1102.226125116" Nov 24 11:34:51 crc kubenswrapper[4678]: I1124 11:34:51.636903 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-vgv2l" Nov 24 11:34:51 crc kubenswrapper[4678]: I1124 11:34:51.682333 4678 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-7kbkq" Nov 24 11:34:52 crc kubenswrapper[4678]: I1124 11:34:52.064764 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7d86657865-d4wl2" Nov 24 11:34:52 crc kubenswrapper[4678]: I1124 11:34:52.125360 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-5q2rm" Nov 24 11:34:52 crc kubenswrapper[4678]: I1124 11:34:52.280207 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" event={"ID":"be206532-b60c-4047-8835-1b57d1714883","Type":"ContainerStarted","Data":"5ca270db59d1ede291a0d498ee70900d109032dd9476c9a8b2a127171b45d7e9"} Nov 24 11:34:52 crc kubenswrapper[4678]: I1124 11:34:52.280439 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" Nov 24 11:34:52 crc kubenswrapper[4678]: I1124 11:34:52.305110 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" podStartSLOduration=5.875313166 podStartE2EDuration="52.305091433s" podCreationTimestamp="2025-11-24 11:34:00 +0000 UTC" firstStartedPulling="2025-11-24 11:34:04.874063655 +0000 UTC m=+1055.805123294" lastFinishedPulling="2025-11-24 11:34:51.303841922 +0000 UTC m=+1102.234901561" observedRunningTime="2025-11-24 11:34:52.299636868 +0000 UTC m=+1103.230696517" watchObservedRunningTime="2025-11-24 11:34:52.305091433 +0000 UTC m=+1103.236151072" Nov 24 11:34:53 crc kubenswrapper[4678]: I1124 11:34:53.288212 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" 
event={"ID":"0fb5a95d-61ef-4850-ba59-0d637233ae88","Type":"ContainerStarted","Data":"f27a7389ce0a3086c79abc8b07c73d630b92aa79b32587a7b2e29ac2928043ec"} Nov 24 11:34:53 crc kubenswrapper[4678]: I1124 11:34:53.290028 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" Nov 24 11:34:53 crc kubenswrapper[4678]: I1124 11:34:53.307377 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" podStartSLOduration=4.74111568 podStartE2EDuration="52.307359453s" podCreationTimestamp="2025-11-24 11:34:01 +0000 UTC" firstStartedPulling="2025-11-24 11:34:05.008392174 +0000 UTC m=+1055.939451813" lastFinishedPulling="2025-11-24 11:34:52.574635937 +0000 UTC m=+1103.505695586" observedRunningTime="2025-11-24 11:34:53.304390954 +0000 UTC m=+1104.235450603" watchObservedRunningTime="2025-11-24 11:34:53.307359453 +0000 UTC m=+1104.238419092" Nov 24 11:35:00 crc kubenswrapper[4678]: I1124 11:35:00.296366 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:35:00 crc kubenswrapper[4678]: I1124 11:35:00.298441 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:35:00 crc kubenswrapper[4678]: I1124 11:35:00.298588 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:35:00 crc 
kubenswrapper[4678]: I1124 11:35:00.299500 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae5ad808ee433867f6ed22b16c3cabcd9999e49e8fb7ad6c2494c4e5839c237e"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:35:00 crc kubenswrapper[4678]: I1124 11:35:00.299682 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://ae5ad808ee433867f6ed22b16c3cabcd9999e49e8fb7ad6c2494c4e5839c237e" gracePeriod=600 Nov 24 11:35:00 crc kubenswrapper[4678]: I1124 11:35:00.969152 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-xlx8j" Nov 24 11:35:01 crc kubenswrapper[4678]: I1124 11:35:01.371767 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="ae5ad808ee433867f6ed22b16c3cabcd9999e49e8fb7ad6c2494c4e5839c237e" exitCode=0 Nov 24 11:35:01 crc kubenswrapper[4678]: I1124 11:35:01.372166 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"ae5ad808ee433867f6ed22b16c3cabcd9999e49e8fb7ad6c2494c4e5839c237e"} Nov 24 11:35:01 crc kubenswrapper[4678]: I1124 11:35:01.372194 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"dd5ea218f678046a66e5b35e3df6bfeb83c4a006c488a84e5029cd1536ff6717"} Nov 24 11:35:01 crc kubenswrapper[4678]: I1124 
11:35:01.372210 4678 scope.go:117] "RemoveContainer" containerID="1197580eb03eaddc7b9dc08dbab8ba6891f416c80d33f4fc3fc03e3113ad80b4" Nov 24 11:35:01 crc kubenswrapper[4678]: I1124 11:35:01.741949 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-q6kcx" Nov 24 11:35:01 crc kubenswrapper[4678]: I1124 11:35:01.784792 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-wvz4p" Nov 24 11:35:01 crc kubenswrapper[4678]: I1124 11:35:01.837288 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-q6dxg" Nov 24 11:35:02 crc kubenswrapper[4678]: I1124 11:35:02.047336 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d656998f4-x8n72" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.125959 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c9lwd"] Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.128572 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.135256 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c9lwd"] Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.138861 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-trh4r" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.139092 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.139205 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.139357 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.236966 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nn2mw"] Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.238567 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.243080 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.270978 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nn2mw"] Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.307781 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4p2\" (UniqueName: \"kubernetes.io/projected/a28ca887-c236-4d83-b986-b24cebcad30f-kube-api-access-rm4p2\") pod \"dnsmasq-dns-675f4bcbfc-c9lwd\" (UID: \"a28ca887-c236-4d83-b986-b24cebcad30f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.307834 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28ca887-c236-4d83-b986-b24cebcad30f-config\") pod \"dnsmasq-dns-675f4bcbfc-c9lwd\" (UID: \"a28ca887-c236-4d83-b986-b24cebcad30f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.409891 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-config\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.409956 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 
crc kubenswrapper[4678]: I1124 11:35:18.410011 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4p2\" (UniqueName: \"kubernetes.io/projected/a28ca887-c236-4d83-b986-b24cebcad30f-kube-api-access-rm4p2\") pod \"dnsmasq-dns-675f4bcbfc-c9lwd\" (UID: \"a28ca887-c236-4d83-b986-b24cebcad30f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.410048 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28ca887-c236-4d83-b986-b24cebcad30f-config\") pod \"dnsmasq-dns-675f4bcbfc-c9lwd\" (UID: \"a28ca887-c236-4d83-b986-b24cebcad30f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.410128 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2cmd\" (UniqueName: \"kubernetes.io/projected/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-kube-api-access-x2cmd\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.411425 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28ca887-c236-4d83-b986-b24cebcad30f-config\") pod \"dnsmasq-dns-675f4bcbfc-c9lwd\" (UID: \"a28ca887-c236-4d83-b986-b24cebcad30f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.436806 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4p2\" (UniqueName: \"kubernetes.io/projected/a28ca887-c236-4d83-b986-b24cebcad30f-kube-api-access-rm4p2\") pod \"dnsmasq-dns-675f4bcbfc-c9lwd\" (UID: \"a28ca887-c236-4d83-b986-b24cebcad30f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 
11:35:18.458583 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.511425 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2cmd\" (UniqueName: \"kubernetes.io/projected/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-kube-api-access-x2cmd\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.511891 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-config\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.512959 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-config\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.514310 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.514954 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.542696 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2cmd\" (UniqueName: \"kubernetes.io/projected/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-kube-api-access-x2cmd\") pod \"dnsmasq-dns-78dd6ddcc-nn2mw\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.566148 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:18 crc kubenswrapper[4678]: I1124 11:35:18.971746 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c9lwd"] Nov 24 11:35:19 crc kubenswrapper[4678]: W1124 11:35:19.137694 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod843fa9b2_c463_4aec_9aa9_4bb76febbdf3.slice/crio-af6ef42c4737d85f5e4bba1fb40cab68c823f66df47d1e18c3314ad8ed29b961 WatchSource:0}: Error finding container af6ef42c4737d85f5e4bba1fb40cab68c823f66df47d1e18c3314ad8ed29b961: Status 404 returned error can't find the container with id af6ef42c4737d85f5e4bba1fb40cab68c823f66df47d1e18c3314ad8ed29b961 Nov 24 11:35:19 crc kubenswrapper[4678]: I1124 11:35:19.143438 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nn2mw"] Nov 24 11:35:19 crc kubenswrapper[4678]: I1124 11:35:19.580925 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" event={"ID":"a28ca887-c236-4d83-b986-b24cebcad30f","Type":"ContainerStarted","Data":"9814b81efadc408d9d8720a2a6509cc3c1a6e6a5f8d18f6877fdca4499a7eedf"} Nov 24 11:35:19 crc kubenswrapper[4678]: I1124 11:35:19.582261 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" 
event={"ID":"843fa9b2-c463-4aec-9aa9-4bb76febbdf3","Type":"ContainerStarted","Data":"af6ef42c4737d85f5e4bba1fb40cab68c823f66df47d1e18c3314ad8ed29b961"} Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.412175 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c9lwd"] Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.439780 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gkz44"] Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.441347 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.493286 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gkz44"] Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.588443 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-config\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.588515 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2ffx\" (UniqueName: \"kubernetes.io/projected/552a3202-f209-4a9f-9ea9-da67d793daaa-kube-api-access-x2ffx\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.588630 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " 
pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.696759 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.696937 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-config\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.696989 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2ffx\" (UniqueName: \"kubernetes.io/projected/552a3202-f209-4a9f-9ea9-da67d793daaa-kube-api-access-x2ffx\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.698537 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.698970 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-config\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.740109 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2ffx\" (UniqueName: \"kubernetes.io/projected/552a3202-f209-4a9f-9ea9-da67d793daaa-kube-api-access-x2ffx\") pod \"dnsmasq-dns-666b6646f7-gkz44\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.776489 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nn2mw"] Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.789783 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.847156 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xtq8d"] Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.868806 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.920584 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.920649 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rq9\" (UniqueName: \"kubernetes.io/projected/fcc92e56-646f-4646-817a-cea16263dc09-kube-api-access-68rq9\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.920703 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-config\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:21 crc kubenswrapper[4678]: I1124 11:35:21.927640 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xtq8d"] Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.022767 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rq9\" (UniqueName: \"kubernetes.io/projected/fcc92e56-646f-4646-817a-cea16263dc09-kube-api-access-68rq9\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.022811 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.022837 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-config\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.023745 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-config\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.023747 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.059082 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rq9\" (UniqueName: \"kubernetes.io/projected/fcc92e56-646f-4646-817a-cea16263dc09-kube-api-access-68rq9\") pod \"dnsmasq-dns-57d769cc4f-xtq8d\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.259710 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.514078 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gkz44"] Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.566423 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.571877 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.579086 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.579288 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.579390 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.579563 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.579564 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.579588 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.579966 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-srnh8" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.587439 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.617232 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" event={"ID":"552a3202-f209-4a9f-9ea9-da67d793daaa","Type":"ContainerStarted","Data":"732bebe2256f8615bae89a9b6779cf3ad70a0c0066791e36bda957e2737f531a"} Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744327 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744401 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744486 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/728e8f13-52c5-4b48-9fff-8053732311b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744512 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744541 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k96n\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-kube-api-access-7k96n\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744606 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/728e8f13-52c5-4b48-9fff-8053732311b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744636 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744659 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744714 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744738 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.744903 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc 
kubenswrapper[4678]: I1124 11:35:22.814812 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xtq8d"] Nov 24 11:35:22 crc kubenswrapper[4678]: W1124 11:35:22.842399 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcc92e56_646f_4646_817a_cea16263dc09.slice/crio-74cbae662eba59586f80a5f83d9737686777ab49f91fce7e7dc5a4d930c91b3a WatchSource:0}: Error finding container 74cbae662eba59586f80a5f83d9737686777ab49f91fce7e7dc5a4d930c91b3a: Status 404 returned error can't find the container with id 74cbae662eba59586f80a5f83d9737686777ab49f91fce7e7dc5a4d930c91b3a Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847397 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/728e8f13-52c5-4b48-9fff-8053732311b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847466 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847498 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k96n\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-kube-api-access-7k96n\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847559 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/728e8f13-52c5-4b48-9fff-8053732311b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847580 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847603 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847632 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847657 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847712 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" 
Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847760 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.847806 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.849382 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.849446 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.850602 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.852480 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.856288 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.856527 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.856828 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.856970 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.858319 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/728e8f13-52c5-4b48-9fff-8053732311b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " 
pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.866934 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/728e8f13-52c5-4b48-9fff-8053732311b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.873591 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k96n\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-kube-api-access-7k96n\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.900977 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.904803 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.921106 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.933765 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.936117 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wmvgb" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.937217 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.937393 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.937641 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.938017 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.938099 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.938933 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 11:35:22 crc kubenswrapper[4678]: I1124 11:35:22.945039 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.050989 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051076 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051108 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051149 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051239 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051256 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051283 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051308 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051338 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051356 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfd85\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-kube-api-access-wfd85\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.051400 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.152892 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.152941 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.152986 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.153026 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.153051 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.153077 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfd85\" (UniqueName: 
\"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-kube-api-access-wfd85\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.153107 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.153159 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.153206 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.153240 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.153285 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.154126 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.154202 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.154914 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.155028 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.155372 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 
11:35:23.159821 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.160861 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.177154 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.181167 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.184341 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfd85\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-kube-api-access-wfd85\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.194550 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.219072 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.288301 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.534622 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:35:23 crc kubenswrapper[4678]: I1124 11:35:23.629570 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" event={"ID":"fcc92e56-646f-4646-817a-cea16263dc09","Type":"ContainerStarted","Data":"74cbae662eba59586f80a5f83d9737686777ab49f91fce7e7dc5a4d930c91b3a"} Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.180329 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.182381 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.185556 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-klm5f" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.185797 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.186278 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.186410 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.193337 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.250239 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.293542 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtpx6\" (UniqueName: \"kubernetes.io/projected/8f4675f4-74be-4f56-a3a6-d7e6aea34614-kube-api-access-qtpx6\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.293631 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f4675f4-74be-4f56-a3a6-d7e6aea34614-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.293686 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4675f4-74be-4f56-a3a6-d7e6aea34614-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.293723 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.293753 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-config-data-default\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.293786 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.293858 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f4675f4-74be-4f56-a3a6-d7e6aea34614-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.293898 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.395274 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f4675f4-74be-4f56-a3a6-d7e6aea34614-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.395330 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4675f4-74be-4f56-a3a6-d7e6aea34614-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.395367 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.395388 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-config-data-default\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.395415 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " 
pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.395459 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f4675f4-74be-4f56-a3a6-d7e6aea34614-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.395495 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.395540 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtpx6\" (UniqueName: \"kubernetes.io/projected/8f4675f4-74be-4f56-a3a6-d7e6aea34614-kube-api-access-qtpx6\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.396426 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.396604 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f4675f4-74be-4f56-a3a6-d7e6aea34614-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.396884 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-config-data-default\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.397352 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.398895 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f4675f4-74be-4f56-a3a6-d7e6aea34614-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.403233 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f4675f4-74be-4f56-a3a6-d7e6aea34614-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.405143 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4675f4-74be-4f56-a3a6-d7e6aea34614-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.428685 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtpx6\" (UniqueName: 
\"kubernetes.io/projected/8f4675f4-74be-4f56-a3a6-d7e6aea34614-kube-api-access-qtpx6\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.487555 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8f4675f4-74be-4f56-a3a6-d7e6aea34614\") " pod="openstack/openstack-galera-0" Nov 24 11:35:24 crc kubenswrapper[4678]: I1124 11:35:24.539784 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.619558 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.623563 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.635132 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.665749 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-29ql5" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.666088 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.666323 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.672511 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.731847 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25fc6cbb-a91d-4c54-9736-5684da015680-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.731923 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.731983 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fc6cbb-a91d-4c54-9736-5684da015680-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.732038 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.732084 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fc6cbb-a91d-4c54-9736-5684da015680-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.732157 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv698\" (UniqueName: \"kubernetes.io/projected/25fc6cbb-a91d-4c54-9736-5684da015680-kube-api-access-vv698\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.732226 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.732308 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.807919 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.814078 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.820818 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.821444 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-m6ts2" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.821602 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.828574 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835129 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835205 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835265 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-kolla-config\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835312 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25fc6cbb-a91d-4c54-9736-5684da015680-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835339 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835362 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835379 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fc6cbb-a91d-4c54-9736-5684da015680-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835407 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835438 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-config-data\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835458 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fc6cbb-a91d-4c54-9736-5684da015680-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835478 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drkjg\" (UniqueName: \"kubernetes.io/projected/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-kube-api-access-drkjg\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835514 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv698\" (UniqueName: \"kubernetes.io/projected/25fc6cbb-a91d-4c54-9736-5684da015680-kube-api-access-vv698\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.835541 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.850333 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25fc6cbb-a91d-4c54-9736-5684da015680-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.851152 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.851261 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.851607 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.861423 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25fc6cbb-a91d-4c54-9736-5684da015680-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.874843 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fc6cbb-a91d-4c54-9736-5684da015680-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " 
pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.878434 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25fc6cbb-a91d-4c54-9736-5684da015680-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.899479 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv698\" (UniqueName: \"kubernetes.io/projected/25fc6cbb-a91d-4c54-9736-5684da015680-kube-api-access-vv698\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.907340 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25fc6cbb-a91d-4c54-9736-5684da015680\") " pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.936939 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-kolla-config\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.937024 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.937105 4678 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-config-data\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.937124 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drkjg\" (UniqueName: \"kubernetes.io/projected/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-kube-api-access-drkjg\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.937185 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.939649 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-config-data\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.941024 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-kolla-config\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.941463 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 
crc kubenswrapper[4678]: I1124 11:35:25.944294 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.970373 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drkjg\" (UniqueName: \"kubernetes.io/projected/559dccdf-14d1-43da-9acf-ddc0ae3fef0a-kube-api-access-drkjg\") pod \"memcached-0\" (UID: \"559dccdf-14d1-43da-9acf-ddc0ae3fef0a\") " pod="openstack/memcached-0" Nov 24 11:35:25 crc kubenswrapper[4678]: I1124 11:35:25.979922 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 11:35:26 crc kubenswrapper[4678]: I1124 11:35:26.018883 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 11:35:27 crc kubenswrapper[4678]: W1124 11:35:27.614608 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728e8f13_52c5_4b48_9fff_8053732311b9.slice/crio-f8306af5af9b53e6fb9823ee55cf4f8752b13c03fd2aa4451658bc979e213b5f WatchSource:0}: Error finding container f8306af5af9b53e6fb9823ee55cf4f8752b13c03fd2aa4451658bc979e213b5f: Status 404 returned error can't find the container with id f8306af5af9b53e6fb9823ee55cf4f8752b13c03fd2aa4451658bc979e213b5f Nov 24 11:35:27 crc kubenswrapper[4678]: I1124 11:35:27.679746 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"728e8f13-52c5-4b48-9fff-8053732311b9","Type":"ContainerStarted","Data":"f8306af5af9b53e6fb9823ee55cf4f8752b13c03fd2aa4451658bc979e213b5f"} Nov 24 11:35:27 crc kubenswrapper[4678]: I1124 11:35:27.753175 4678 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:35:27 crc kubenswrapper[4678]: I1124 11:35:27.754546 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:35:27 crc kubenswrapper[4678]: I1124 11:35:27.764906 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pc8c2" Nov 24 11:35:27 crc kubenswrapper[4678]: I1124 11:35:27.773083 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmm6b\" (UniqueName: \"kubernetes.io/projected/ddc6efef-042b-489a-a545-669ec3783e86-kube-api-access-lmm6b\") pod \"kube-state-metrics-0\" (UID: \"ddc6efef-042b-489a-a545-669ec3783e86\") " pod="openstack/kube-state-metrics-0" Nov 24 11:35:27 crc kubenswrapper[4678]: I1124 11:35:27.784547 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:35:27 crc kubenswrapper[4678]: I1124 11:35:27.879645 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmm6b\" (UniqueName: \"kubernetes.io/projected/ddc6efef-042b-489a-a545-669ec3783e86-kube-api-access-lmm6b\") pod \"kube-state-metrics-0\" (UID: \"ddc6efef-042b-489a-a545-669ec3783e86\") " pod="openstack/kube-state-metrics-0" Nov 24 11:35:27 crc kubenswrapper[4678]: I1124 11:35:27.934940 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmm6b\" (UniqueName: \"kubernetes.io/projected/ddc6efef-042b-489a-a545-669ec3783e86-kube-api-access-lmm6b\") pod \"kube-state-metrics-0\" (UID: \"ddc6efef-042b-489a-a545-669ec3783e86\") " pod="openstack/kube-state-metrics-0" Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.117301 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.654682 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp"] Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.655964 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.678013 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.678240 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-mld5l" Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.701142 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp"] Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.805805 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78276\" (UniqueName: \"kubernetes.io/projected/2a26fe34-6696-484e-aba7-bf8eb21ff389-kube-api-access-78276\") pod \"observability-ui-dashboards-7d5fb4cbfb-sj4wp\" (UID: \"2a26fe34-6696-484e-aba7-bf8eb21ff389\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.805916 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a26fe34-6696-484e-aba7-bf8eb21ff389-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-sj4wp\" (UID: \"2a26fe34-6696-484e-aba7-bf8eb21ff389\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 
11:35:28.907507 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a26fe34-6696-484e-aba7-bf8eb21ff389-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-sj4wp\" (UID: \"2a26fe34-6696-484e-aba7-bf8eb21ff389\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.907641 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78276\" (UniqueName: \"kubernetes.io/projected/2a26fe34-6696-484e-aba7-bf8eb21ff389-kube-api-access-78276\") pod \"observability-ui-dashboards-7d5fb4cbfb-sj4wp\" (UID: \"2a26fe34-6696-484e-aba7-bf8eb21ff389\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:28 crc kubenswrapper[4678]: E1124 11:35:28.908153 4678 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 24 11:35:28 crc kubenswrapper[4678]: E1124 11:35:28.908233 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a26fe34-6696-484e-aba7-bf8eb21ff389-serving-cert podName:2a26fe34-6696-484e-aba7-bf8eb21ff389 nodeName:}" failed. No retries permitted until 2025-11-24 11:35:29.408213837 +0000 UTC m=+1140.339273476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2a26fe34-6696-484e-aba7-bf8eb21ff389-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-sj4wp" (UID: "2a26fe34-6696-484e-aba7-bf8eb21ff389") : secret "observability-ui-dashboards" not found Nov 24 11:35:28 crc kubenswrapper[4678]: I1124 11:35:28.929584 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78276\" (UniqueName: \"kubernetes.io/projected/2a26fe34-6696-484e-aba7-bf8eb21ff389-kube-api-access-78276\") pod \"observability-ui-dashboards-7d5fb4cbfb-sj4wp\" (UID: \"2a26fe34-6696-484e-aba7-bf8eb21ff389\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.222498 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85477fb56d-hhgq2"] Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.223886 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.242560 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.247346 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.262306 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.276527 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.282870 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.283046 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.283417 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vq5ht" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.300088 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.317425 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.327843 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e74f0139-c991-4537-be74-c7b3379389cd-console-oauth-config\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.327905 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.327946 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.327975 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-trusted-ca-bundle\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.327999 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-oauth-serving-cert\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328025 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnrd\" (UniqueName: \"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-kube-api-access-nsnrd\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328051 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328082 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-console-config\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328151 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328178 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f794d99b-6371-445e-9bb9-74f0bdbee6bc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328198 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snr4v\" (UniqueName: \"kubernetes.io/projected/e74f0139-c991-4537-be74-c7b3379389cd-kube-api-access-snr4v\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328230 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328255 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-service-ca\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328292 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e74f0139-c991-4537-be74-c7b3379389cd-console-serving-cert\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.328313 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.386399 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85477fb56d-hhgq2"] Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430107 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-console-config\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430233 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430260 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f794d99b-6371-445e-9bb9-74f0bdbee6bc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430283 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snr4v\" (UniqueName: \"kubernetes.io/projected/e74f0139-c991-4537-be74-c7b3379389cd-kube-api-access-snr4v\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430318 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430338 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-service-ca\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430383 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a26fe34-6696-484e-aba7-bf8eb21ff389-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-sj4wp\" (UID: \"2a26fe34-6696-484e-aba7-bf8eb21ff389\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430407 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e74f0139-c991-4537-be74-c7b3379389cd-console-serving-cert\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430428 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430473 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e74f0139-c991-4537-be74-c7b3379389cd-console-oauth-config\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430495 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430531 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430551 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-trusted-ca-bundle\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430567 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-oauth-serving-cert\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430584 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsnrd\" (UniqueName: \"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-kube-api-access-nsnrd\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.430606 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.433007 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-service-ca\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.433069 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-console-config\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.444088 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-trusted-ca-bundle\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.444836 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e74f0139-c991-4537-be74-c7b3379389cd-oauth-serving-cert\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.446481 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.447521 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e74f0139-c991-4537-be74-c7b3379389cd-console-serving-cert\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.447779 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f794d99b-6371-445e-9bb9-74f0bdbee6bc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.450852 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.455443 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.456434 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.457139 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a26fe34-6696-484e-aba7-bf8eb21ff389-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-sj4wp\" (UID: \"2a26fe34-6696-484e-aba7-bf8eb21ff389\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.462370 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.462520 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snr4v\" (UniqueName: \"kubernetes.io/projected/e74f0139-c991-4537-be74-c7b3379389cd-kube-api-access-snr4v\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.473176 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e74f0139-c991-4537-be74-c7b3379389cd-console-oauth-config\") pod \"console-85477fb56d-hhgq2\" (UID: \"e74f0139-c991-4537-be74-c7b3379389cd\") " pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.500543 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsnrd\" (UniqueName: 
\"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-kube-api-access-nsnrd\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.554343 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.586703 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.589035 4678 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.589065 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/57a7f3ff30c2cd09ff1e8e65689295a1eec29ca4dace6e801961241a67275580/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.662025 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") pod \"prometheus-metric-storage-0\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.877921 4678 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"metric-storage-prometheus-dockercfg-vq5ht" Nov 24 11:35:29 crc kubenswrapper[4678]: I1124 11:35:29.885481 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 11:35:30 crc kubenswrapper[4678]: I1124 11:35:30.713945 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.439134 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-blf4t"] Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.440961 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.444973 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.445259 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.445696 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qrstp" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.459798 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-blf4t"] Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.489480 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-run\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.489529 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/de344c51-a739-44dc-b0a2-914839d40a8b-scripts\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.489591 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-run-ovn\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.489640 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdvx\" (UniqueName: \"kubernetes.io/projected/de344c51-a739-44dc-b0a2-914839d40a8b-kube-api-access-lxdvx\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.489690 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-log-ovn\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.489737 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/de344c51-a739-44dc-b0a2-914839d40a8b-ovn-controller-tls-certs\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.489765 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de344c51-a739-44dc-b0a2-914839d40a8b-combined-ca-bundle\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.523051 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xnsx2"] Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.526031 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.530745 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xnsx2"] Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592069 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9a9841c-3831-4419-a66f-0c84a801082f-scripts\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592157 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-log\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592196 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-run\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592260 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-etc-ovs\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592332 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxdvx\" (UniqueName: \"kubernetes.io/projected/de344c51-a739-44dc-b0a2-914839d40a8b-kube-api-access-lxdvx\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592350 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-lib\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592389 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-log-ovn\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592466 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/de344c51-a739-44dc-b0a2-914839d40a8b-ovn-controller-tls-certs\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592512 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77shs\" (UniqueName: 
\"kubernetes.io/projected/d9a9841c-3831-4419-a66f-0c84a801082f-kube-api-access-77shs\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592539 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de344c51-a739-44dc-b0a2-914839d40a8b-combined-ca-bundle\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592585 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-run\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592615 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de344c51-a739-44dc-b0a2-914839d40a8b-scripts\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.592703 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-run-ovn\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.593185 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-run\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " 
pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.593211 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-log-ovn\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.593291 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/de344c51-a739-44dc-b0a2-914839d40a8b-var-run-ovn\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.595097 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de344c51-a739-44dc-b0a2-914839d40a8b-scripts\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.602401 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de344c51-a739-44dc-b0a2-914839d40a8b-combined-ca-bundle\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.602819 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/de344c51-a739-44dc-b0a2-914839d40a8b-ovn-controller-tls-certs\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.612401 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lxdvx\" (UniqueName: \"kubernetes.io/projected/de344c51-a739-44dc-b0a2-914839d40a8b-kube-api-access-lxdvx\") pod \"ovn-controller-blf4t\" (UID: \"de344c51-a739-44dc-b0a2-914839d40a8b\") " pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.699018 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9a9841c-3831-4419-a66f-0c84a801082f-scripts\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.699072 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-log\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.699105 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-run\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.699128 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-etc-ovs\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.699150 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-lib\") pod \"ovn-controller-ovs-xnsx2\" (UID: 
\"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.699222 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77shs\" (UniqueName: \"kubernetes.io/projected/d9a9841c-3831-4419-a66f-0c84a801082f-kube-api-access-77shs\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.699959 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-log\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.700059 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-run\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.700146 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-etc-ovs\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.700181 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d9a9841c-3831-4419-a66f-0c84a801082f-var-lib\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.701435 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9a9841c-3831-4419-a66f-0c84a801082f-scripts\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.743365 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77shs\" (UniqueName: \"kubernetes.io/projected/d9a9841c-3831-4419-a66f-0c84a801082f-kube-api-access-77shs\") pod \"ovn-controller-ovs-xnsx2\" (UID: \"d9a9841c-3831-4419-a66f-0c84a801082f\") " pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.822491 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-blf4t" Nov 24 11:35:31 crc kubenswrapper[4678]: I1124 11:35:31.850168 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.348143 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.349886 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.354298 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tpzks" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.354386 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.354745 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.355104 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.355320 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.374316 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.445288 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78965549-7245-45c4-a523-132073321076-config\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.445359 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78965549-7245-45c4-a523-132073321076-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.445394 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.445427 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.445448 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78965549-7245-45c4-a523-132073321076-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.445779 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.445855 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.445929 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql8q5\" (UniqueName: 
\"kubernetes.io/projected/78965549-7245-45c4-a523-132073321076-kube-api-access-ql8q5\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.546957 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78965549-7245-45c4-a523-132073321076-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.547018 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.547038 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.547061 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78965549-7245-45c4-a523-132073321076-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.547145 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.547175 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.547200 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql8q5\" (UniqueName: \"kubernetes.io/projected/78965549-7245-45c4-a523-132073321076-kube-api-access-ql8q5\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.547232 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78965549-7245-45c4-a523-132073321076-config\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.548061 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78965549-7245-45c4-a523-132073321076-config\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.548372 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78965549-7245-45c4-a523-132073321076-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.551323 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/78965549-7245-45c4-a523-132073321076-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.551342 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.555414 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.557612 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.562185 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78965549-7245-45c4-a523-132073321076-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.573649 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql8q5\" (UniqueName: \"kubernetes.io/projected/78965549-7245-45c4-a523-132073321076-kube-api-access-ql8q5\") pod \"ovsdbserver-nb-0\" (UID: 
\"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.591502 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"78965549-7245-45c4-a523-132073321076\") " pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:32 crc kubenswrapper[4678]: I1124 11:35:32.677200 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 11:35:34 crc kubenswrapper[4678]: I1124 11:35:34.804071 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"559dccdf-14d1-43da-9acf-ddc0ae3fef0a","Type":"ContainerStarted","Data":"2a445f2e0ce9ec03cd2727c665040dde0f76f621257fc25221a4eafb2e87fd99"} Nov 24 11:35:34 crc kubenswrapper[4678]: I1124 11:35:34.966183 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 11:35:34 crc kubenswrapper[4678]: I1124 11:35:34.969562 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:34 crc kubenswrapper[4678]: I1124 11:35:34.973911 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x992r" Nov 24 11:35:34 crc kubenswrapper[4678]: I1124 11:35:34.975887 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 11:35:34 crc kubenswrapper[4678]: I1124 11:35:34.976162 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 11:35:34 crc kubenswrapper[4678]: I1124 11:35:34.976323 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 11:35:34 crc kubenswrapper[4678]: I1124 11:35:34.984153 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.112255 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpcs4\" (UniqueName: \"kubernetes.io/projected/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-kube-api-access-rpcs4\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.112302 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.112397 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.112475 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-config\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.112499 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.112526 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.112772 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.112868 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 
11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.234443 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-config\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.234502 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.234541 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.234581 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.234614 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.234647 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpcs4\" (UniqueName: 
\"kubernetes.io/projected/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-kube-api-access-rpcs4\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.234662 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.234727 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.236014 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-config\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.236242 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.236387 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc 
kubenswrapper[4678]: I1124 11:35:35.236535 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.244577 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.251389 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.253738 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpcs4\" (UniqueName: \"kubernetes.io/projected/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-kube-api-access-rpcs4\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.255974 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbb9f62-41c9-4c77-b572-e14fb76a8b45-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.266018 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9cbb9f62-41c9-4c77-b572-e14fb76a8b45\") " pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:35 crc kubenswrapper[4678]: I1124 11:35:35.310820 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 11:35:39 crc kubenswrapper[4678]: I1124 11:35:39.437783 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 11:35:39 crc kubenswrapper[4678]: E1124 11:35:39.986793 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 11:35:39 crc kubenswrapper[4678]: E1124 11:35:39.987123 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm4p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-c9lwd_openstack(a28ca887-c236-4d83-b986-b24cebcad30f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:35:39 crc kubenswrapper[4678]: E1124 11:35:39.988375 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" podUID="a28ca887-c236-4d83-b986-b24cebcad30f" Nov 24 11:35:41 crc kubenswrapper[4678]: W1124 11:35:41.002859 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f4675f4_74be_4f56_a3a6_d7e6aea34614.slice/crio-ca5051444a63b69598ad4c05bd759c56e30f15a020ea7ad86e54ab64891c4dd7 WatchSource:0}: Error finding container ca5051444a63b69598ad4c05bd759c56e30f15a020ea7ad86e54ab64891c4dd7: Status 404 returned error can't find the container with id ca5051444a63b69598ad4c05bd759c56e30f15a020ea7ad86e54ab64891c4dd7 Nov 24 11:35:41 crc kubenswrapper[4678]: E1124 11:35:41.062314 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 11:35:41 crc kubenswrapper[4678]: E1124 11:35:41.063050 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2cmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-nn2mw_openstack(843fa9b2-c463-4aec-9aa9-4bb76febbdf3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:35:41 crc kubenswrapper[4678]: E1124 11:35:41.064886 4678 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" podUID="843fa9b2-c463-4aec-9aa9-4bb76febbdf3" Nov 24 11:35:41 crc kubenswrapper[4678]: I1124 11:35:41.502814 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:35:41 crc kubenswrapper[4678]: I1124 11:35:41.969805 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f4675f4-74be-4f56-a3a6-d7e6aea34614","Type":"ContainerStarted","Data":"ca5051444a63b69598ad4c05bd759c56e30f15a020ea7ad86e54ab64891c4dd7"} Nov 24 11:35:41 crc kubenswrapper[4678]: I1124 11:35:41.979060 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.105147 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28ca887-c236-4d83-b986-b24cebcad30f-config\") pod \"a28ca887-c236-4d83-b986-b24cebcad30f\" (UID: \"a28ca887-c236-4d83-b986-b24cebcad30f\") " Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.105870 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm4p2\" (UniqueName: \"kubernetes.io/projected/a28ca887-c236-4d83-b986-b24cebcad30f-kube-api-access-rm4p2\") pod \"a28ca887-c236-4d83-b986-b24cebcad30f\" (UID: \"a28ca887-c236-4d83-b986-b24cebcad30f\") " Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.105684 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28ca887-c236-4d83-b986-b24cebcad30f-config" (OuterVolumeSpecName: "config") pod "a28ca887-c236-4d83-b986-b24cebcad30f" (UID: "a28ca887-c236-4d83-b986-b24cebcad30f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.106563 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28ca887-c236-4d83-b986-b24cebcad30f-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.110852 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28ca887-c236-4d83-b986-b24cebcad30f-kube-api-access-rm4p2" (OuterVolumeSpecName: "kube-api-access-rm4p2") pod "a28ca887-c236-4d83-b986-b24cebcad30f" (UID: "a28ca887-c236-4d83-b986-b24cebcad30f"). InnerVolumeSpecName "kube-api-access-rm4p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.208677 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm4p2\" (UniqueName: \"kubernetes.io/projected/a28ca887-c236-4d83-b986-b24cebcad30f-kube-api-access-rm4p2\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.627210 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.740491 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2cmd\" (UniqueName: \"kubernetes.io/projected/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-kube-api-access-x2cmd\") pod \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.741215 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-dns-svc\") pod \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.741311 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-config\") pod \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\" (UID: \"843fa9b2-c463-4aec-9aa9-4bb76febbdf3\") " Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.742337 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-config" (OuterVolumeSpecName: "config") pod "843fa9b2-c463-4aec-9aa9-4bb76febbdf3" (UID: "843fa9b2-c463-4aec-9aa9-4bb76febbdf3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.743611 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "843fa9b2-c463-4aec-9aa9-4bb76febbdf3" (UID: "843fa9b2-c463-4aec-9aa9-4bb76febbdf3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.749359 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-kube-api-access-x2cmd" (OuterVolumeSpecName: "kube-api-access-x2cmd") pod "843fa9b2-c463-4aec-9aa9-4bb76febbdf3" (UID: "843fa9b2-c463-4aec-9aa9-4bb76febbdf3"). InnerVolumeSpecName "kube-api-access-x2cmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.844687 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.844723 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2cmd\" (UniqueName: \"kubernetes.io/projected/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-kube-api-access-x2cmd\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.844739 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/843fa9b2-c463-4aec-9aa9-4bb76febbdf3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.916433 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" event={"ID":"843fa9b2-c463-4aec-9aa9-4bb76febbdf3","Type":"ContainerDied","Data":"af6ef42c4737d85f5e4bba1fb40cab68c823f66df47d1e18c3314ad8ed29b961"} Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.916520 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nn2mw" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.922774 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"559dccdf-14d1-43da-9acf-ddc0ae3fef0a","Type":"ContainerStarted","Data":"d142317829d50124774a2d9f963c8d1259153f9788c2e3d836ad2e91827d8a5a"} Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.923004 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.924025 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6","Type":"ContainerStarted","Data":"ed166c84ee9e1a2992d93fde899118b8229afb9b6a3f2724184ae772123537ab"} Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.925645 4678 generic.go:334] "Generic (PLEG): container finished" podID="552a3202-f209-4a9f-9ea9-da67d793daaa" containerID="2fc98ab2b9f8562f3be0327687a56d6e6003263ca07a00c427048904d995300e" exitCode=0 Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.925701 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" event={"ID":"552a3202-f209-4a9f-9ea9-da67d793daaa","Type":"ContainerDied","Data":"2fc98ab2b9f8562f3be0327687a56d6e6003263ca07a00c427048904d995300e"} Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.927785 4678 generic.go:334] "Generic (PLEG): container finished" podID="fcc92e56-646f-4646-817a-cea16263dc09" containerID="539ee93eecf88d77b7add509e9eb56e012668944a04af235b44d5404d8dfe580" exitCode=0 Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.927866 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" event={"ID":"fcc92e56-646f-4646-817a-cea16263dc09","Type":"ContainerDied","Data":"539ee93eecf88d77b7add509e9eb56e012668944a04af235b44d5404d8dfe580"} Nov 24 11:35:42 crc 
kubenswrapper[4678]: I1124 11:35:42.929074 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" event={"ID":"a28ca887-c236-4d83-b986-b24cebcad30f","Type":"ContainerDied","Data":"9814b81efadc408d9d8720a2a6509cc3c1a6e6a5f8d18f6877fdca4499a7eedf"} Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.929103 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c9lwd" Nov 24 11:35:42 crc kubenswrapper[4678]: I1124 11:35:42.947410 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=9.886241112 podStartE2EDuration="17.947393203s" podCreationTimestamp="2025-11-24 11:35:25 +0000 UTC" firstStartedPulling="2025-11-24 11:35:33.957862855 +0000 UTC m=+1144.888922494" lastFinishedPulling="2025-11-24 11:35:42.019014946 +0000 UTC m=+1152.950074585" observedRunningTime="2025-11-24 11:35:42.941639 +0000 UTC m=+1153.872698639" watchObservedRunningTime="2025-11-24 11:35:42.947393203 +0000 UTC m=+1153.878452842" Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.090760 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nn2mw"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.106314 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nn2mw"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.123707 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c9lwd"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.135430 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c9lwd"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.143589 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.153817 4678 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.163061 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.170397 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85477fb56d-hhgq2"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.341911 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-blf4t"] Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.352045 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:35:43 crc kubenswrapper[4678]: W1124 11:35:43.482588 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a26fe34_6696_484e_aba7_bf8eb21ff389.slice/crio-e97dbe59113e1010f64f647130c9c562bc58233faea42ca1a824700fec73cf7a WatchSource:0}: Error finding container e97dbe59113e1010f64f647130c9c562bc58233faea42ca1a824700fec73cf7a: Status 404 returned error can't find the container with id e97dbe59113e1010f64f647130c9c562bc58233faea42ca1a824700fec73cf7a Nov 24 11:35:43 crc kubenswrapper[4678]: W1124 11:35:43.486316 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25fc6cbb_a91d_4c54_9736_5684da015680.slice/crio-83d50a39e89d890aa4c3730c95c497604993c8e0a481b52edae25237145fbc7f WatchSource:0}: Error finding container 83d50a39e89d890aa4c3730c95c497604993c8e0a481b52edae25237145fbc7f: Status 404 returned error can't find the container with id 83d50a39e89d890aa4c3730c95c497604993c8e0a481b52edae25237145fbc7f Nov 24 11:35:43 crc kubenswrapper[4678]: W1124 11:35:43.490243 4678 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddc6efef_042b_489a_a545_669ec3783e86.slice/crio-1e71d5d5aff9c10975e3dc5807721568c40b9c2f3181d551a3397954fb734bb4 WatchSource:0}: Error finding container 1e71d5d5aff9c10975e3dc5807721568c40b9c2f3181d551a3397954fb734bb4: Status 404 returned error can't find the container with id 1e71d5d5aff9c10975e3dc5807721568c40b9c2f3181d551a3397954fb734bb4 Nov 24 11:35:43 crc kubenswrapper[4678]: W1124 11:35:43.497852 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode74f0139_c991_4537_be74_c7b3379389cd.slice/crio-ef70ab30f2d1b8bae224f180ad5414cbdbfe5e037ac0458608d7063acbc4c067 WatchSource:0}: Error finding container ef70ab30f2d1b8bae224f180ad5414cbdbfe5e037ac0458608d7063acbc4c067: Status 404 returned error can't find the container with id ef70ab30f2d1b8bae224f180ad5414cbdbfe5e037ac0458608d7063acbc4c067 Nov 24 11:35:43 crc kubenswrapper[4678]: W1124 11:35:43.507321 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf794d99b_6371_445e_9bb9_74f0bdbee6bc.slice/crio-d61fc5e8e03ee1bf1870bd41700b1fb19a2db9b2487241e023e9a36e8572cea7 WatchSource:0}: Error finding container d61fc5e8e03ee1bf1870bd41700b1fb19a2db9b2487241e023e9a36e8572cea7: Status 404 returned error can't find the container with id d61fc5e8e03ee1bf1870bd41700b1fb19a2db9b2487241e023e9a36e8572cea7 Nov 24 11:35:43 crc kubenswrapper[4678]: W1124 11:35:43.507544 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde344c51_a739_44dc_b0a2_914839d40a8b.slice/crio-ae6e6859cf96d5f629eb24ab2fd4e7fdee3d22d09257cd07bcc998135585a718 WatchSource:0}: Error finding container ae6e6859cf96d5f629eb24ab2fd4e7fdee3d22d09257cd07bcc998135585a718: Status 404 returned error can't find the container with id 
ae6e6859cf96d5f629eb24ab2fd4e7fdee3d22d09257cd07bcc998135585a718 Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.542206 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 11:35:43 crc kubenswrapper[4678]: W1124 11:35:43.547333 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78965549_7245_45c4_a523_132073321076.slice/crio-9e2f369c0745142d450518cbdc234409ea290fee547b5aee207bc047c9f0977e WatchSource:0}: Error finding container 9e2f369c0745142d450518cbdc234409ea290fee547b5aee207bc047c9f0977e: Status 404 returned error can't find the container with id 9e2f369c0745142d450518cbdc234409ea290fee547b5aee207bc047c9f0977e Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.624995 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xnsx2"] Nov 24 11:35:43 crc kubenswrapper[4678]: W1124 11:35:43.627362 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9a9841c_3831_4419_a66f_0c84a801082f.slice/crio-0b8e5ebf59217adfee980224bf0948cf7c1ff1f73e232fa0dfe1fc2ae51f52a6 WatchSource:0}: Error finding container 0b8e5ebf59217adfee980224bf0948cf7c1ff1f73e232fa0dfe1fc2ae51f52a6: Status 404 returned error can't find the container with id 0b8e5ebf59217adfee980224bf0948cf7c1ff1f73e232fa0dfe1fc2ae51f52a6 Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.910948 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843fa9b2-c463-4aec-9aa9-4bb76febbdf3" path="/var/lib/kubelet/pods/843fa9b2-c463-4aec-9aa9-4bb76febbdf3/volumes" Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.911368 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28ca887-c236-4d83-b986-b24cebcad30f" path="/var/lib/kubelet/pods/a28ca887-c236-4d83-b986-b24cebcad30f/volumes" Nov 24 11:35:43 crc kubenswrapper[4678]: 
I1124 11:35:43.939106 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" event={"ID":"2a26fe34-6696-484e-aba7-bf8eb21ff389","Type":"ContainerStarted","Data":"e97dbe59113e1010f64f647130c9c562bc58233faea42ca1a824700fec73cf7a"} Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.940123 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85477fb56d-hhgq2" event={"ID":"e74f0139-c991-4537-be74-c7b3379389cd","Type":"ContainerStarted","Data":"ef70ab30f2d1b8bae224f180ad5414cbdbfe5e037ac0458608d7063acbc4c067"} Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.942199 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25fc6cbb-a91d-4c54-9736-5684da015680","Type":"ContainerStarted","Data":"83d50a39e89d890aa4c3730c95c497604993c8e0a481b52edae25237145fbc7f"} Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.943829 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"78965549-7245-45c4-a523-132073321076","Type":"ContainerStarted","Data":"9e2f369c0745142d450518cbdc234409ea290fee547b5aee207bc047c9f0977e"} Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.945055 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerStarted","Data":"d61fc5e8e03ee1bf1870bd41700b1fb19a2db9b2487241e023e9a36e8572cea7"} Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.946458 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-blf4t" event={"ID":"de344c51-a739-44dc-b0a2-914839d40a8b","Type":"ContainerStarted","Data":"ae6e6859cf96d5f629eb24ab2fd4e7fdee3d22d09257cd07bcc998135585a718"} Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.947797 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-xnsx2" event={"ID":"d9a9841c-3831-4419-a66f-0c84a801082f","Type":"ContainerStarted","Data":"0b8e5ebf59217adfee980224bf0948cf7c1ff1f73e232fa0dfe1fc2ae51f52a6"} Nov 24 11:35:43 crc kubenswrapper[4678]: I1124 11:35:43.949211 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddc6efef-042b-489a-a545-669ec3783e86","Type":"ContainerStarted","Data":"1e71d5d5aff9c10975e3dc5807721568c40b9c2f3181d551a3397954fb734bb4"} Nov 24 11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.061990 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 11:35:44 crc kubenswrapper[4678]: W1124 11:35:44.078339 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cbb9f62_41c9_4c77_b572_e14fb76a8b45.slice/crio-c551aa1b07ff7307d70fd0777a8aef39e538f32cf3ea96c44b2977688fd7f60e WatchSource:0}: Error finding container c551aa1b07ff7307d70fd0777a8aef39e538f32cf3ea96c44b2977688fd7f60e: Status 404 returned error can't find the container with id c551aa1b07ff7307d70fd0777a8aef39e538f32cf3ea96c44b2977688fd7f60e Nov 24 11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.967506 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" event={"ID":"fcc92e56-646f-4646-817a-cea16263dc09","Type":"ContainerStarted","Data":"f9f802160f5a83e501e9f415d23c26d959ec6b4be0052edd2bee8565affc49a0"} Nov 24 11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.968096 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.970361 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"728e8f13-52c5-4b48-9fff-8053732311b9","Type":"ContainerStarted","Data":"3a7b5ef4c4fa5ee85ae38f98dba7ea094ecd28d33191e8a701dfe02bc4368e70"} Nov 24 
11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.974707 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85477fb56d-hhgq2" event={"ID":"e74f0139-c991-4537-be74-c7b3379389cd","Type":"ContainerStarted","Data":"ce61708d8ea5a6df2eb174af56ee1c7014d08e84e36b22e9c4b108b9e8b42163"} Nov 24 11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.977385 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6","Type":"ContainerStarted","Data":"78a42a92af69cea2096a817c36fa21b3dd0f79b6d7fef3c6e4842c308a764028"} Nov 24 11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.980082 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" event={"ID":"552a3202-f209-4a9f-9ea9-da67d793daaa","Type":"ContainerStarted","Data":"152019e3d563ad51ba5da00b6005201e936139a854b103243cedab4ea7813ee3"} Nov 24 11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.980276 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:44 crc kubenswrapper[4678]: I1124 11:35:44.982877 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9cbb9f62-41c9-4c77-b572-e14fb76a8b45","Type":"ContainerStarted","Data":"c551aa1b07ff7307d70fd0777a8aef39e538f32cf3ea96c44b2977688fd7f60e"} Nov 24 11:35:45 crc kubenswrapper[4678]: I1124 11:35:45.008118 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" podStartSLOduration=4.930700984 podStartE2EDuration="24.008077215s" podCreationTimestamp="2025-11-24 11:35:21 +0000 UTC" firstStartedPulling="2025-11-24 11:35:22.859749527 +0000 UTC m=+1133.790809166" lastFinishedPulling="2025-11-24 11:35:41.937125768 +0000 UTC m=+1152.868185397" observedRunningTime="2025-11-24 11:35:44.987154501 +0000 UTC m=+1155.918214160" watchObservedRunningTime="2025-11-24 
11:35:45.008077215 +0000 UTC m=+1155.939136854" Nov 24 11:35:45 crc kubenswrapper[4678]: I1124 11:35:45.041733 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85477fb56d-hhgq2" podStartSLOduration=16.041714105 podStartE2EDuration="16.041714105s" podCreationTimestamp="2025-11-24 11:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:35:45.031856104 +0000 UTC m=+1155.962915753" watchObservedRunningTime="2025-11-24 11:35:45.041714105 +0000 UTC m=+1155.972773744" Nov 24 11:35:45 crc kubenswrapper[4678]: I1124 11:35:45.068086 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" podStartSLOduration=4.573699611 podStartE2EDuration="24.068059352s" podCreationTimestamp="2025-11-24 11:35:21 +0000 UTC" firstStartedPulling="2025-11-24 11:35:22.525765035 +0000 UTC m=+1133.456824674" lastFinishedPulling="2025-11-24 11:35:42.020124776 +0000 UTC m=+1152.951184415" observedRunningTime="2025-11-24 11:35:45.053377474 +0000 UTC m=+1155.984437113" watchObservedRunningTime="2025-11-24 11:35:45.068059352 +0000 UTC m=+1155.999118991" Nov 24 11:35:49 crc kubenswrapper[4678]: I1124 11:35:49.555436 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:49 crc kubenswrapper[4678]: I1124 11:35:49.556284 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:49 crc kubenswrapper[4678]: I1124 11:35:49.563008 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 11:35:50 crc kubenswrapper[4678]: I1124 11:35:50.040129 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85477fb56d-hhgq2" Nov 24 
11:35:50 crc kubenswrapper[4678]: I1124 11:35:50.143328 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b6d66f75b-9j4v9"] Nov 24 11:35:51 crc kubenswrapper[4678]: I1124 11:35:51.020853 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 11:35:51 crc kubenswrapper[4678]: I1124 11:35:51.797878 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:52 crc kubenswrapper[4678]: I1124 11:35:52.260870 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:35:52 crc kubenswrapper[4678]: I1124 11:35:52.319630 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gkz44"] Nov 24 11:35:52 crc kubenswrapper[4678]: I1124 11:35:52.320166 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" podUID="552a3202-f209-4a9f-9ea9-da67d793daaa" containerName="dnsmasq-dns" containerID="cri-o://152019e3d563ad51ba5da00b6005201e936139a854b103243cedab4ea7813ee3" gracePeriod=10 Nov 24 11:35:54 crc kubenswrapper[4678]: I1124 11:35:54.074765 4678 generic.go:334] "Generic (PLEG): container finished" podID="552a3202-f209-4a9f-9ea9-da67d793daaa" containerID="152019e3d563ad51ba5da00b6005201e936139a854b103243cedab4ea7813ee3" exitCode=0 Nov 24 11:35:54 crc kubenswrapper[4678]: I1124 11:35:54.074831 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" event={"ID":"552a3202-f209-4a9f-9ea9-da67d793daaa","Type":"ContainerDied","Data":"152019e3d563ad51ba5da00b6005201e936139a854b103243cedab4ea7813ee3"} Nov 24 11:35:54 crc kubenswrapper[4678]: I1124 11:35:54.950271 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.090191 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" event={"ID":"552a3202-f209-4a9f-9ea9-da67d793daaa","Type":"ContainerDied","Data":"732bebe2256f8615bae89a9b6779cf3ad70a0c0066791e36bda957e2737f531a"} Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.090241 4678 scope.go:117] "RemoveContainer" containerID="152019e3d563ad51ba5da00b6005201e936139a854b103243cedab4ea7813ee3" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.090367 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gkz44" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.119934 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2ffx\" (UniqueName: \"kubernetes.io/projected/552a3202-f209-4a9f-9ea9-da67d793daaa-kube-api-access-x2ffx\") pod \"552a3202-f209-4a9f-9ea9-da67d793daaa\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.120027 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-dns-svc\") pod \"552a3202-f209-4a9f-9ea9-da67d793daaa\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.120068 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-config\") pod \"552a3202-f209-4a9f-9ea9-da67d793daaa\" (UID: \"552a3202-f209-4a9f-9ea9-da67d793daaa\") " Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.129977 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/552a3202-f209-4a9f-9ea9-da67d793daaa-kube-api-access-x2ffx" (OuterVolumeSpecName: "kube-api-access-x2ffx") pod "552a3202-f209-4a9f-9ea9-da67d793daaa" (UID: "552a3202-f209-4a9f-9ea9-da67d793daaa"). InnerVolumeSpecName "kube-api-access-x2ffx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.165536 4678 scope.go:117] "RemoveContainer" containerID="2fc98ab2b9f8562f3be0327687a56d6e6003263ca07a00c427048904d995300e" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.174065 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "552a3202-f209-4a9f-9ea9-da67d793daaa" (UID: "552a3202-f209-4a9f-9ea9-da67d793daaa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.184759 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-config" (OuterVolumeSpecName: "config") pod "552a3202-f209-4a9f-9ea9-da67d793daaa" (UID: "552a3202-f209-4a9f-9ea9-da67d793daaa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.222571 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2ffx\" (UniqueName: \"kubernetes.io/projected/552a3202-f209-4a9f-9ea9-da67d793daaa-kube-api-access-x2ffx\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.222608 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.222618 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552a3202-f209-4a9f-9ea9-da67d793daaa-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.488264 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gkz44"] Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.497709 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gkz44"] Nov 24 11:35:55 crc kubenswrapper[4678]: I1124 11:35:55.908355 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552a3202-f209-4a9f-9ea9-da67d793daaa" path="/var/lib/kubelet/pods/552a3202-f209-4a9f-9ea9-da67d793daaa/volumes" Nov 24 11:35:56 crc kubenswrapper[4678]: I1124 11:35:56.099958 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-blf4t" event={"ID":"de344c51-a739-44dc-b0a2-914839d40a8b","Type":"ContainerStarted","Data":"c7b608ac1a020ac7127651e0a3a196450430104f6bfc9ad0c8b9b4c3cc44ed02"} Nov 24 11:35:56 crc kubenswrapper[4678]: I1124 11:35:56.100083 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-blf4t" Nov 24 11:35:56 crc kubenswrapper[4678]: I1124 11:35:56.108171 4678 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25fc6cbb-a91d-4c54-9736-5684da015680","Type":"ContainerStarted","Data":"b226db3b72abf94ade88a979fe9eb120087df9646cb882670e463bcf365d9912"} Nov 24 11:35:56 crc kubenswrapper[4678]: I1124 11:35:56.122606 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-blf4t" podStartSLOduration=13.914812177 podStartE2EDuration="25.122582106s" podCreationTimestamp="2025-11-24 11:35:31 +0000 UTC" firstStartedPulling="2025-11-24 11:35:43.526846193 +0000 UTC m=+1154.457905832" lastFinishedPulling="2025-11-24 11:35:54.734616122 +0000 UTC m=+1165.665675761" observedRunningTime="2025-11-24 11:35:56.116999648 +0000 UTC m=+1167.048059297" watchObservedRunningTime="2025-11-24 11:35:56.122582106 +0000 UTC m=+1167.053641745" Nov 24 11:35:57 crc kubenswrapper[4678]: I1124 11:35:57.120715 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddc6efef-042b-489a-a545-669ec3783e86","Type":"ContainerStarted","Data":"2e96bdbc0b9bc6563a6ab853bd2cad52c358a4f10d76fba029a0efa39c86cced"} Nov 24 11:35:57 crc kubenswrapper[4678]: I1124 11:35:57.121386 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 11:35:57 crc kubenswrapper[4678]: I1124 11:35:57.122977 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" event={"ID":"2a26fe34-6696-484e-aba7-bf8eb21ff389","Type":"ContainerStarted","Data":"1aedd8173492d05a0797d9266abcf325bf1e3e3b401d60f16f714b8441efb658"} Nov 24 11:35:57 crc kubenswrapper[4678]: I1124 11:35:57.126390 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"78965549-7245-45c4-a523-132073321076","Type":"ContainerStarted","Data":"d65a7ba0d51cc242f3dbeb5d8132037f3ff60c7883ecf094f97e8443d1fbdf57"} Nov 24 11:35:57 crc kubenswrapper[4678]: 
I1124 11:35:57.130325 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9cbb9f62-41c9-4c77-b572-e14fb76a8b45","Type":"ContainerStarted","Data":"1c268ad1f7343bf8fa6a254b6aac620e8b0ec928c5548e793e69dcf6fbe8b0f2"} Nov 24 11:35:57 crc kubenswrapper[4678]: I1124 11:35:57.134335 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f4675f4-74be-4f56-a3a6-d7e6aea34614","Type":"ContainerStarted","Data":"cdde208f7d34e9a47bfea0de32ee36d02c4455722026f76349b5c74ff5d03d84"} Nov 24 11:35:57 crc kubenswrapper[4678]: I1124 11:35:57.139240 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.311073743 podStartE2EDuration="30.139223989s" podCreationTimestamp="2025-11-24 11:35:27 +0000 UTC" firstStartedPulling="2025-11-24 11:35:43.498583815 +0000 UTC m=+1154.429643454" lastFinishedPulling="2025-11-24 11:35:56.326734061 +0000 UTC m=+1167.257793700" observedRunningTime="2025-11-24 11:35:57.137995007 +0000 UTC m=+1168.069054646" watchObservedRunningTime="2025-11-24 11:35:57.139223989 +0000 UTC m=+1168.070283628" Nov 24 11:35:57 crc kubenswrapper[4678]: I1124 11:35:57.145518 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xnsx2" event={"ID":"d9a9841c-3831-4419-a66f-0c84a801082f","Type":"ContainerStarted","Data":"ce174074a050b4e7b3fbc18a39b3670d4d71d2e23a534e0cb11ab6e976b122e6"} Nov 24 11:35:57 crc kubenswrapper[4678]: I1124 11:35:57.230854 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-sj4wp" podStartSLOduration=17.660475006 podStartE2EDuration="29.230813064s" podCreationTimestamp="2025-11-24 11:35:28 +0000 UTC" firstStartedPulling="2025-11-24 11:35:43.487495001 +0000 UTC m=+1154.418554640" lastFinishedPulling="2025-11-24 11:35:55.057833059 +0000 UTC m=+1165.988892698" 
observedRunningTime="2025-11-24 11:35:57.186800749 +0000 UTC m=+1168.117860398" watchObservedRunningTime="2025-11-24 11:35:57.230813064 +0000 UTC m=+1168.161872713" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.211766 4678 generic.go:334] "Generic (PLEG): container finished" podID="d9a9841c-3831-4419-a66f-0c84a801082f" containerID="ce174074a050b4e7b3fbc18a39b3670d4d71d2e23a534e0cb11ab6e976b122e6" exitCode=0 Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.213838 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xnsx2" event={"ID":"d9a9841c-3831-4419-a66f-0c84a801082f","Type":"ContainerDied","Data":"ce174074a050b4e7b3fbc18a39b3670d4d71d2e23a534e0cb11ab6e976b122e6"} Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.315504 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-rpz9v"] Nov 24 11:35:58 crc kubenswrapper[4678]: E1124 11:35:58.316156 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552a3202-f209-4a9f-9ea9-da67d793daaa" containerName="init" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.316175 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="552a3202-f209-4a9f-9ea9-da67d793daaa" containerName="init" Nov 24 11:35:58 crc kubenswrapper[4678]: E1124 11:35:58.316190 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552a3202-f209-4a9f-9ea9-da67d793daaa" containerName="dnsmasq-dns" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.316197 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="552a3202-f209-4a9f-9ea9-da67d793daaa" containerName="dnsmasq-dns" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.316415 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="552a3202-f209-4a9f-9ea9-da67d793daaa" containerName="dnsmasq-dns" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.317846 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.327109 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-rpz9v"] Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.389311 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8dk\" (UniqueName: \"kubernetes.io/projected/79a6831e-5782-487e-ae5c-88373fb86b78-kube-api-access-rs8dk\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.389399 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-config\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.389552 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.491345 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-config\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.491497 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.491550 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8dk\" (UniqueName: \"kubernetes.io/projected/79a6831e-5782-487e-ae5c-88373fb86b78-kube-api-access-rs8dk\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.492896 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-config\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.493631 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.525728 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8dk\" (UniqueName: \"kubernetes.io/projected/79a6831e-5782-487e-ae5c-88373fb86b78-kube-api-access-rs8dk\") pod \"dnsmasq-dns-7cb5889db5-rpz9v\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:58 crc kubenswrapper[4678]: I1124 11:35:58.709922 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.280600 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xnsx2" event={"ID":"d9a9841c-3831-4419-a66f-0c84a801082f","Type":"ContainerStarted","Data":"2d5121fb3c29f56f98410c1824c0199c8aedfb34664dcdcc3a66727862104d7d"} Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.335250 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-rpz9v"] Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.463172 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.480188 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.484090 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.489225 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.491218 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.491363 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-x2ckv" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.491577 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.665849 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1a7a4a62-9baa-4df8-ba83-688dc6817249-lock\") pod \"swift-storage-0\" (UID: 
\"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.666286 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.666309 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1a7a4a62-9baa-4df8-ba83-688dc6817249-cache\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.666359 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z678v\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-kube-api-access-z678v\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.666451 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.768676 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1a7a4a62-9baa-4df8-ba83-688dc6817249-lock\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.768773 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.768805 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1a7a4a62-9baa-4df8-ba83-688dc6817249-cache\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.768856 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z678v\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-kube-api-access-z678v\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.768920 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: E1124 11:35:59.769195 4678 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 11:35:59 crc kubenswrapper[4678]: E1124 11:35:59.769224 4678 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 11:35:59 crc kubenswrapper[4678]: E1124 11:35:59.769299 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift podName:1a7a4a62-9baa-4df8-ba83-688dc6817249 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:36:00.269268014 +0000 UTC m=+1171.200327653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift") pod "swift-storage-0" (UID: "1a7a4a62-9baa-4df8-ba83-688dc6817249") : configmap "swift-ring-files" not found Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.770060 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.770164 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1a7a4a62-9baa-4df8-ba83-688dc6817249-cache\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.770356 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1a7a4a62-9baa-4df8-ba83-688dc6817249-lock\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.803576 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z678v\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-kube-api-access-z678v\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:35:59 crc kubenswrapper[4678]: I1124 11:35:59.812725 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.286455 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:36:00 crc kubenswrapper[4678]: E1124 11:36:00.286740 4678 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 11:36:00 crc kubenswrapper[4678]: E1124 11:36:00.287022 4678 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 11:36:00 crc kubenswrapper[4678]: E1124 11:36:00.287128 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift podName:1a7a4a62-9baa-4df8-ba83-688dc6817249 nodeName:}" failed. No retries permitted until 2025-11-24 11:36:01.287098782 +0000 UTC m=+1172.218158421 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift") pod "swift-storage-0" (UID: "1a7a4a62-9baa-4df8-ba83-688dc6817249") : configmap "swift-ring-files" not found Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.346616 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerStarted","Data":"9379a259a4a78313d7d9ff5af56185af8a366751b747318d10a88f696dab3fed"} Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.377097 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xnsx2" event={"ID":"d9a9841c-3831-4419-a66f-0c84a801082f","Type":"ContainerStarted","Data":"5e199dba3bfb0902d587328700c8bbb76437c4d089e9deb5b8ee07ce169754ac"} Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.377431 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.377518 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.390244 4678 generic.go:334] "Generic (PLEG): container finished" podID="79a6831e-5782-487e-ae5c-88373fb86b78" containerID="734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3" exitCode=0 Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.390366 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" event={"ID":"79a6831e-5782-487e-ae5c-88373fb86b78","Type":"ContainerDied","Data":"734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3"} Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.390404 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" 
event={"ID":"79a6831e-5782-487e-ae5c-88373fb86b78","Type":"ContainerStarted","Data":"f74e74153e50bb24b73565faf82e08a0da19526f616bb4a970bf7ea9a6a6b967"} Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.410741 4678 generic.go:334] "Generic (PLEG): container finished" podID="25fc6cbb-a91d-4c54-9736-5684da015680" containerID="b226db3b72abf94ade88a979fe9eb120087df9646cb882670e463bcf365d9912" exitCode=0 Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.410793 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25fc6cbb-a91d-4c54-9736-5684da015680","Type":"ContainerDied","Data":"b226db3b72abf94ade88a979fe9eb120087df9646cb882670e463bcf365d9912"} Nov 24 11:36:00 crc kubenswrapper[4678]: I1124 11:36:00.421248 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-xnsx2" podStartSLOduration=18.319160382 podStartE2EDuration="29.421213773s" podCreationTimestamp="2025-11-24 11:35:31 +0000 UTC" firstStartedPulling="2025-11-24 11:35:43.632326125 +0000 UTC m=+1154.563385764" lastFinishedPulling="2025-11-24 11:35:54.734379516 +0000 UTC m=+1165.665439155" observedRunningTime="2025-11-24 11:36:00.408296331 +0000 UTC m=+1171.339355980" watchObservedRunningTime="2025-11-24 11:36:00.421213773 +0000 UTC m=+1171.352273412" Nov 24 11:36:01 crc kubenswrapper[4678]: I1124 11:36:01.327769 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:36:01 crc kubenswrapper[4678]: E1124 11:36:01.328147 4678 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 11:36:01 crc kubenswrapper[4678]: E1124 11:36:01.328398 4678 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 11:36:01 crc kubenswrapper[4678]: E1124 11:36:01.328503 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift podName:1a7a4a62-9baa-4df8-ba83-688dc6817249 nodeName:}" failed. No retries permitted until 2025-11-24 11:36:03.328468431 +0000 UTC m=+1174.259528090 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift") pod "swift-storage-0" (UID: "1a7a4a62-9baa-4df8-ba83-688dc6817249") : configmap "swift-ring-files" not found Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.302009 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4wb58"] Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.303937 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.307140 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.307369 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.307496 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.313870 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4wb58"] Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.395436 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-scripts\") pod 
\"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.396096 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-dispersionconf\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.396145 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-ring-data-devices\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.396191 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-swiftconf\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.396242 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7x5f\" (UniqueName: \"kubernetes.io/projected/1d9fedfc-2539-44c3-9124-7b5c96af23da-kube-api-access-t7x5f\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.396336 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/1d9fedfc-2539-44c3-9124-7b5c96af23da-etc-swift\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.396397 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:36:03 crc kubenswrapper[4678]: E1124 11:36:03.396692 4678 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 11:36:03 crc kubenswrapper[4678]: E1124 11:36:03.396736 4678 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 11:36:03 crc kubenswrapper[4678]: E1124 11:36:03.396799 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift podName:1a7a4a62-9baa-4df8-ba83-688dc6817249 nodeName:}" failed. No retries permitted until 2025-11-24 11:36:07.396773335 +0000 UTC m=+1178.327833154 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift") pod "swift-storage-0" (UID: "1a7a4a62-9baa-4df8-ba83-688dc6817249") : configmap "swift-ring-files" not found Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.396941 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-combined-ca-bundle\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.451229 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25fc6cbb-a91d-4c54-9736-5684da015680","Type":"ContainerStarted","Data":"44c1041389cdf0de35e70d58288e627c8d561ea142b55d3de31a9991149d1bbd"} Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.456938 4678 generic.go:334] "Generic (PLEG): container finished" podID="8f4675f4-74be-4f56-a3a6-d7e6aea34614" containerID="cdde208f7d34e9a47bfea0de32ee36d02c4455722026f76349b5c74ff5d03d84" exitCode=0 Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.457037 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f4675f4-74be-4f56-a3a6-d7e6aea34614","Type":"ContainerDied","Data":"cdde208f7d34e9a47bfea0de32ee36d02c4455722026f76349b5c74ff5d03d84"} Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.463306 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" event={"ID":"79a6831e-5782-487e-ae5c-88373fb86b78","Type":"ContainerStarted","Data":"0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664"} Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.463978 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.487096 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.927078241 podStartE2EDuration="39.487080616s" podCreationTimestamp="2025-11-24 11:35:24 +0000 UTC" firstStartedPulling="2025-11-24 11:35:43.497450294 +0000 UTC m=+1154.428509933" lastFinishedPulling="2025-11-24 11:35:55.057452639 +0000 UTC m=+1165.988512308" observedRunningTime="2025-11-24 11:36:03.479073073 +0000 UTC m=+1174.410132712" watchObservedRunningTime="2025-11-24 11:36:03.487080616 +0000 UTC m=+1174.418140255" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.502915 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-ring-data-devices\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.503008 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-dispersionconf\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.503065 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-swiftconf\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.503208 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7x5f\" (UniqueName: 
\"kubernetes.io/projected/1d9fedfc-2539-44c3-9124-7b5c96af23da-kube-api-access-t7x5f\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.503434 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d9fedfc-2539-44c3-9124-7b5c96af23da-etc-swift\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.503657 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-combined-ca-bundle\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.504149 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-scripts\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.506126 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d9fedfc-2539-44c3-9124-7b5c96af23da-etc-swift\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.506709 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-ring-data-devices\") pod 
\"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.511881 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-scripts\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.512113 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" podStartSLOduration=5.512101357 podStartE2EDuration="5.512101357s" podCreationTimestamp="2025-11-24 11:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:03.502545565 +0000 UTC m=+1174.433605204" watchObservedRunningTime="2025-11-24 11:36:03.512101357 +0000 UTC m=+1174.443160996" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.523134 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-dispersionconf\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.536016 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-swiftconf\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.536209 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-combined-ca-bundle\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.537322 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7x5f\" (UniqueName: \"kubernetes.io/projected/1d9fedfc-2539-44c3-9124-7b5c96af23da-kube-api-access-t7x5f\") pod \"swift-ring-rebalance-4wb58\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:03 crc kubenswrapper[4678]: I1124 11:36:03.637306 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:05 crc kubenswrapper[4678]: I1124 11:36:05.628007 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4wb58"] Nov 24 11:36:05 crc kubenswrapper[4678]: W1124 11:36:05.638846 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d9fedfc_2539_44c3_9124_7b5c96af23da.slice/crio-885b2709ecb47e2958b94f6218613ea3596faddf1413de53e43c9e1863619f55 WatchSource:0}: Error finding container 885b2709ecb47e2958b94f6218613ea3596faddf1413de53e43c9e1863619f55: Status 404 returned error can't find the container with id 885b2709ecb47e2958b94f6218613ea3596faddf1413de53e43c9e1863619f55 Nov 24 11:36:05 crc kubenswrapper[4678]: I1124 11:36:05.981359 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 11:36:05 crc kubenswrapper[4678]: I1124 11:36:05.981816 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 11:36:06 crc kubenswrapper[4678]: I1124 11:36:06.521654 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"9cbb9f62-41c9-4c77-b572-e14fb76a8b45","Type":"ContainerStarted","Data":"f5da91e9fa8e2f12914edf5e60dbaad007d44188fbb24730bbf57f2b691566c8"} Nov 24 11:36:06 crc kubenswrapper[4678]: I1124 11:36:06.526513 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f4675f4-74be-4f56-a3a6-d7e6aea34614","Type":"ContainerStarted","Data":"e4cbaefd61132e3fb9f96201f91c05f5894be37564f69aef53c6594dd840f041"} Nov 24 11:36:06 crc kubenswrapper[4678]: I1124 11:36:06.528162 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4wb58" event={"ID":"1d9fedfc-2539-44c3-9124-7b5c96af23da","Type":"ContainerStarted","Data":"885b2709ecb47e2958b94f6218613ea3596faddf1413de53e43c9e1863619f55"} Nov 24 11:36:06 crc kubenswrapper[4678]: I1124 11:36:06.530724 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"78965549-7245-45c4-a523-132073321076","Type":"ContainerStarted","Data":"4cb82d2856b5ee3dab3b2d990ac6f138dc07d797f8bec4de5cd27588e0140a7c"} Nov 24 11:36:06 crc kubenswrapper[4678]: I1124 11:36:06.549447 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.461330484 podStartE2EDuration="33.549386123s" podCreationTimestamp="2025-11-24 11:35:33 +0000 UTC" firstStartedPulling="2025-11-24 11:35:44.082728608 +0000 UTC m=+1155.013788247" lastFinishedPulling="2025-11-24 11:36:05.170784247 +0000 UTC m=+1176.101843886" observedRunningTime="2025-11-24 11:36:06.54212808 +0000 UTC m=+1177.473187729" watchObservedRunningTime="2025-11-24 11:36:06.549386123 +0000 UTC m=+1177.480445762" Nov 24 11:36:06 crc kubenswrapper[4678]: I1124 11:36:06.581476 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.937550118 podStartE2EDuration="35.581453842s" podCreationTimestamp="2025-11-24 11:35:31 +0000 UTC" 
firstStartedPulling="2025-11-24 11:35:43.553804146 +0000 UTC m=+1154.484863785" lastFinishedPulling="2025-11-24 11:36:05.19770787 +0000 UTC m=+1176.128767509" observedRunningTime="2025-11-24 11:36:06.572914005 +0000 UTC m=+1177.503973654" watchObservedRunningTime="2025-11-24 11:36:06.581453842 +0000 UTC m=+1177.512513481" Nov 24 11:36:06 crc kubenswrapper[4678]: I1124 11:36:06.598494 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.481766667 podStartE2EDuration="43.598478613s" podCreationTimestamp="2025-11-24 11:35:23 +0000 UTC" firstStartedPulling="2025-11-24 11:35:41.024529379 +0000 UTC m=+1151.955589028" lastFinishedPulling="2025-11-24 11:35:54.141241335 +0000 UTC m=+1165.072300974" observedRunningTime="2025-11-24 11:36:06.595191065 +0000 UTC m=+1177.526250714" watchObservedRunningTime="2025-11-24 11:36:06.598478613 +0000 UTC m=+1177.529538252" Nov 24 11:36:07 crc kubenswrapper[4678]: I1124 11:36:07.492903 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:36:07 crc kubenswrapper[4678]: E1124 11:36:07.493138 4678 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 11:36:07 crc kubenswrapper[4678]: E1124 11:36:07.493413 4678 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 11:36:07 crc kubenswrapper[4678]: E1124 11:36:07.493491 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift podName:1a7a4a62-9baa-4df8-ba83-688dc6817249 nodeName:}" failed. 
No retries permitted until 2025-11-24 11:36:15.493466015 +0000 UTC m=+1186.424525644 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift") pod "swift-storage-0" (UID: "1a7a4a62-9baa-4df8-ba83-688dc6817249") : configmap "swift-ring-files" not found Nov 24 11:36:07 crc kubenswrapper[4678]: I1124 11:36:07.678133 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.123140 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.227576 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.312370 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.319320 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.363567 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.570264 4678 generic.go:334] "Generic (PLEG): container finished" podID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerID="9379a259a4a78313d7d9ff5af56185af8a366751b747318d10a88f696dab3fed" exitCode=0 Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.571172 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerDied","Data":"9379a259a4a78313d7d9ff5af56185af8a366751b747318d10a88f696dab3fed"} Nov 24 11:36:08 crc 
kubenswrapper[4678]: I1124 11:36:08.571745 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.623622 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.682122 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.731813 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.848853 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xtq8d"] Nov 24 11:36:08 crc kubenswrapper[4678]: I1124 11:36:08.849121 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" podUID="fcc92e56-646f-4646-817a-cea16263dc09" containerName="dnsmasq-dns" containerID="cri-o://f9f802160f5a83e501e9f415d23c26d959ec6b4be0052edd2bee8565affc49a0" gracePeriod=10 Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.024812 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-dthhm"] Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.027050 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.036188 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.038118 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-dthhm"] Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.054043 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.054115 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwgg8\" (UniqueName: \"kubernetes.io/projected/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-kube-api-access-vwgg8\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.054197 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.054241 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-config\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " 
pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.056005 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.162924 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.163829 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.163931 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwgg8\" (UniqueName: \"kubernetes.io/projected/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-kube-api-access-vwgg8\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.163999 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.164106 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-config\") pod 
\"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.165053 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-config\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.169921 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.189018 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwgg8\" (UniqueName: \"kubernetes.io/projected/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-kube-api-access-vwgg8\") pod \"dnsmasq-dns-6c89d5d749-dthhm\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.231529 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vbxmj"] Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.233061 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.236507 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.251735 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vbxmj"] Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.362441 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.374231 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcb7171-feaa-413b-a0af-e4adf0bef864-config\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.374437 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5k4\" (UniqueName: \"kubernetes.io/projected/bfcb7171-feaa-413b-a0af-e4adf0bef864-kube-api-access-8c5k4\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.374679 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bfcb7171-feaa-413b-a0af-e4adf0bef864-ovs-rundir\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.374711 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/bfcb7171-feaa-413b-a0af-e4adf0bef864-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.374744 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcb7171-feaa-413b-a0af-e4adf0bef864-combined-ca-bundle\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.374767 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bfcb7171-feaa-413b-a0af-e4adf0bef864-ovn-rundir\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.477444 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c5k4\" (UniqueName: \"kubernetes.io/projected/bfcb7171-feaa-413b-a0af-e4adf0bef864-kube-api-access-8c5k4\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.477543 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bfcb7171-feaa-413b-a0af-e4adf0bef864-ovs-rundir\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.477561 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfcb7171-feaa-413b-a0af-e4adf0bef864-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.477582 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcb7171-feaa-413b-a0af-e4adf0bef864-combined-ca-bundle\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.477620 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bfcb7171-feaa-413b-a0af-e4adf0bef864-ovn-rundir\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.477929 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bfcb7171-feaa-413b-a0af-e4adf0bef864-ovs-rundir\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.478041 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bfcb7171-feaa-413b-a0af-e4adf0bef864-ovn-rundir\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.478113 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bfcb7171-feaa-413b-a0af-e4adf0bef864-config\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.478883 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfcb7171-feaa-413b-a0af-e4adf0bef864-config\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.481288 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcb7171-feaa-413b-a0af-e4adf0bef864-combined-ca-bundle\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.490303 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfcb7171-feaa-413b-a0af-e4adf0bef864-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.505455 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c5k4\" (UniqueName: \"kubernetes.io/projected/bfcb7171-feaa-413b-a0af-e4adf0bef864-kube-api-access-8c5k4\") pod \"ovn-controller-metrics-vbxmj\" (UID: \"bfcb7171-feaa-413b-a0af-e4adf0bef864\") " pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.513721 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-dthhm"] Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.534634 4678 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-2kwbz"] Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.558117 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2kwbz"] Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.558256 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.560530 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vbxmj" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.569390 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.603435 4678 generic.go:334] "Generic (PLEG): container finished" podID="fcc92e56-646f-4646-817a-cea16263dc09" containerID="f9f802160f5a83e501e9f415d23c26d959ec6b4be0052edd2bee8565affc49a0" exitCode=0 Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.603558 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" event={"ID":"fcc92e56-646f-4646-817a-cea16263dc09","Type":"ContainerDied","Data":"f9f802160f5a83e501e9f415d23c26d959ec6b4be0052edd2bee8565affc49a0"} Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.651985 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.683056 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-dns-svc\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.683186 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.683240 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.683381 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-config\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.683459 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdbp\" (UniqueName: \"kubernetes.io/projected/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-kube-api-access-gjdbp\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.788119 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-config\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.788203 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdbp\" (UniqueName: \"kubernetes.io/projected/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-kube-api-access-gjdbp\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.788302 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-dns-svc\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.789282 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-config\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.789316 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-dns-svc\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.788390 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.789415 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.789429 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.790044 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.805498 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdbp\" (UniqueName: \"kubernetes.io/projected/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-kube-api-access-gjdbp\") pod \"dnsmasq-dns-698758b865-2kwbz\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.887796 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.985032 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.990127 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.993375 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.993422 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-bcxsx" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.993531 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 11:36:09 crc kubenswrapper[4678]: I1124 11:36:09.994806 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.021566 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.097497 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.097554 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc293cb3-7b1d-4102-b9c3-65e58516ec79-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.097627 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc293cb3-7b1d-4102-b9c3-65e58516ec79-config\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 
11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.097711 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.097774 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.098019 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjs9c\" (UniqueName: \"kubernetes.io/projected/dc293cb3-7b1d-4102-b9c3-65e58516ec79-kube-api-access-xjs9c\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.098150 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc293cb3-7b1d-4102-b9c3-65e58516ec79-scripts\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.200311 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.200707 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc293cb3-7b1d-4102-b9c3-65e58516ec79-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.200740 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc293cb3-7b1d-4102-b9c3-65e58516ec79-config\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.200792 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.200835 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.200919 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjs9c\" (UniqueName: \"kubernetes.io/projected/dc293cb3-7b1d-4102-b9c3-65e58516ec79-kube-api-access-xjs9c\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.200962 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc293cb3-7b1d-4102-b9c3-65e58516ec79-scripts\") pod \"ovn-northd-0\" (UID: 
\"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.202329 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc293cb3-7b1d-4102-b9c3-65e58516ec79-scripts\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.204661 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc293cb3-7b1d-4102-b9c3-65e58516ec79-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.204933 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc293cb3-7b1d-4102-b9c3-65e58516ec79-config\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.207301 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.209201 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.218715 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dc293cb3-7b1d-4102-b9c3-65e58516ec79-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.241588 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjs9c\" (UniqueName: \"kubernetes.io/projected/dc293cb3-7b1d-4102-b9c3-65e58516ec79-kube-api-access-xjs9c\") pod \"ovn-northd-0\" (UID: \"dc293cb3-7b1d-4102-b9c3-65e58516ec79\") " pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.333389 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.623162 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4wb58" event={"ID":"1d9fedfc-2539-44c3-9124-7b5c96af23da","Type":"ContainerStarted","Data":"c7deb1ace3b5f56387e12c577ede205cdeb37697a2ea5f29ecc8a9266e3b47b5"} Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.738921 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4wb58" podStartSLOduration=3.212281469 podStartE2EDuration="7.738902721s" podCreationTimestamp="2025-11-24 11:36:03 +0000 UTC" firstStartedPulling="2025-11-24 11:36:05.644149849 +0000 UTC m=+1176.575209488" lastFinishedPulling="2025-11-24 11:36:10.170771101 +0000 UTC m=+1181.101830740" observedRunningTime="2025-11-24 11:36:10.69618403 +0000 UTC m=+1181.627243669" watchObservedRunningTime="2025-11-24 11:36:10.738902721 +0000 UTC m=+1181.669962360" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.741809 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vbxmj"] Nov 24 11:36:10 crc kubenswrapper[4678]: W1124 11:36:10.797073 4678 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfcb7171_feaa_413b_a0af_e4adf0bef864.slice/crio-7ceae17c9d13a93b3824c3d2018f8a8839831054c0fdc62677bcc7521c73fa0a WatchSource:0}: Error finding container 7ceae17c9d13a93b3824c3d2018f8a8839831054c0fdc62677bcc7521c73fa0a: Status 404 returned error can't find the container with id 7ceae17c9d13a93b3824c3d2018f8a8839831054c0fdc62677bcc7521c73fa0a Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.889816 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.926926 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-config\") pod \"fcc92e56-646f-4646-817a-cea16263dc09\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.927266 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-dns-svc\") pod \"fcc92e56-646f-4646-817a-cea16263dc09\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.927516 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68rq9\" (UniqueName: \"kubernetes.io/projected/fcc92e56-646f-4646-817a-cea16263dc09-kube-api-access-68rq9\") pod \"fcc92e56-646f-4646-817a-cea16263dc09\" (UID: \"fcc92e56-646f-4646-817a-cea16263dc09\") " Nov 24 11:36:10 crc kubenswrapper[4678]: I1124 11:36:10.947848 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc92e56-646f-4646-817a-cea16263dc09-kube-api-access-68rq9" (OuterVolumeSpecName: "kube-api-access-68rq9") pod "fcc92e56-646f-4646-817a-cea16263dc09" (UID: 
"fcc92e56-646f-4646-817a-cea16263dc09"). InnerVolumeSpecName "kube-api-access-68rq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.030565 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68rq9\" (UniqueName: \"kubernetes.io/projected/fcc92e56-646f-4646-817a-cea16263dc09-kube-api-access-68rq9\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.085466 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fcc92e56-646f-4646-817a-cea16263dc09" (UID: "fcc92e56-646f-4646-817a-cea16263dc09"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.085545 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-config" (OuterVolumeSpecName: "config") pod "fcc92e56-646f-4646-817a-cea16263dc09" (UID: "fcc92e56-646f-4646-817a-cea16263dc09"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.133092 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.133123 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcc92e56-646f-4646-817a-cea16263dc09-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.376569 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2kwbz"] Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.394635 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-dthhm"] Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.412299 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 11:36:11 crc kubenswrapper[4678]: W1124 11:36:11.430886 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc293cb3_7b1d_4102_b9c3_65e58516ec79.slice/crio-29a1e4d8d0cfd16b82b5028b61e0e32e49e3738b6ad9ff0f65df5c8c3b6143a4 WatchSource:0}: Error finding container 29a1e4d8d0cfd16b82b5028b61e0e32e49e3738b6ad9ff0f65df5c8c3b6143a4: Status 404 returned error can't find the container with id 29a1e4d8d0cfd16b82b5028b61e0e32e49e3738b6ad9ff0f65df5c8c3b6143a4 Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.662815 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" event={"ID":"fcc92e56-646f-4646-817a-cea16263dc09","Type":"ContainerDied","Data":"74cbae662eba59586f80a5f83d9737686777ab49f91fce7e7dc5a4d930c91b3a"} Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.662883 4678 scope.go:117] "RemoveContainer" 
containerID="f9f802160f5a83e501e9f415d23c26d959ec6b4be0052edd2bee8565affc49a0" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.663041 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xtq8d" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.678797 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" event={"ID":"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d","Type":"ContainerStarted","Data":"13238272425ea7790476f1dccd23d891f298e0843ef4b582f0bf64997f7a0e6e"} Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.685432 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2kwbz" event={"ID":"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6","Type":"ContainerStarted","Data":"ad5ee14efd5a9876c0961cc664cb0c32d1d66598ce8c191d90a4beb8572b2e9f"} Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.688908 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc293cb3-7b1d-4102-b9c3-65e58516ec79","Type":"ContainerStarted","Data":"29a1e4d8d0cfd16b82b5028b61e0e32e49e3738b6ad9ff0f65df5c8c3b6143a4"} Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.693805 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vbxmj" event={"ID":"bfcb7171-feaa-413b-a0af-e4adf0bef864","Type":"ContainerStarted","Data":"81288cfcf49553c897723bfb4be4f92be50f4705333b465d3fbffd12500b3290"} Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.693855 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vbxmj" event={"ID":"bfcb7171-feaa-413b-a0af-e4adf0bef864","Type":"ContainerStarted","Data":"7ceae17c9d13a93b3824c3d2018f8a8839831054c0fdc62677bcc7521c73fa0a"} Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.724171 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xtq8d"] Nov 24 
11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.724375 4678 scope.go:117] "RemoveContainer" containerID="539ee93eecf88d77b7add509e9eb56e012668944a04af235b44d5404d8dfe580" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.742244 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xtq8d"] Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.743763 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vbxmj" podStartSLOduration=2.7437418019999997 podStartE2EDuration="2.743741802s" podCreationTimestamp="2025-11-24 11:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:11.733682705 +0000 UTC m=+1182.664742344" watchObservedRunningTime="2025-11-24 11:36:11.743741802 +0000 UTC m=+1182.674801441" Nov 24 11:36:11 crc kubenswrapper[4678]: I1124 11:36:11.928790 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc92e56-646f-4646-817a-cea16263dc09" path="/var/lib/kubelet/pods/fcc92e56-646f-4646-817a-cea16263dc09/volumes" Nov 24 11:36:12 crc kubenswrapper[4678]: I1124 11:36:12.712251 4678 generic.go:334] "Generic (PLEG): container finished" podID="d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" containerID="4653ca593da88f2c84b8341a720d677a6da037ff33f8c4fc379fe42f6a876a97" exitCode=0 Nov 24 11:36:12 crc kubenswrapper[4678]: I1124 11:36:12.712714 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" event={"ID":"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d","Type":"ContainerDied","Data":"4653ca593da88f2c84b8341a720d677a6da037ff33f8c4fc379fe42f6a876a97"} Nov 24 11:36:12 crc kubenswrapper[4678]: I1124 11:36:12.717469 4678 generic.go:334] "Generic (PLEG): container finished" podID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerID="475073f8bc92d3951aab31de77fb078ec053140578be6a4a92c6582beac1e810" exitCode=0 Nov 24 
11:36:12 crc kubenswrapper[4678]: I1124 11:36:12.718737 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2kwbz" event={"ID":"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6","Type":"ContainerDied","Data":"475073f8bc92d3951aab31de77fb078ec053140578be6a4a92c6582beac1e810"} Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.151961 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.292359 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwgg8\" (UniqueName: \"kubernetes.io/projected/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-kube-api-access-vwgg8\") pod \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.293003 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-dns-svc\") pod \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.293390 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-ovsdbserver-sb\") pod \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.293483 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-config\") pod \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\" (UID: \"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d\") " Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.299149 4678 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-kube-api-access-vwgg8" (OuterVolumeSpecName: "kube-api-access-vwgg8") pod "d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" (UID: "d1b32f71-b898-4cd7-8aea-aaa4dc76b80d"). InnerVolumeSpecName "kube-api-access-vwgg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.337772 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-config" (OuterVolumeSpecName: "config") pod "d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" (UID: "d1b32f71-b898-4cd7-8aea-aaa4dc76b80d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.346965 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" (UID: "d1b32f71-b898-4cd7-8aea-aaa4dc76b80d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.357107 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" (UID: "d1b32f71-b898-4cd7-8aea-aaa4dc76b80d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.398425 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.398467 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.398480 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.398515 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwgg8\" (UniqueName: \"kubernetes.io/projected/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d-kube-api-access-vwgg8\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.738276 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2kwbz" event={"ID":"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6","Type":"ContainerStarted","Data":"05b090a80272c7a581a95ec04f56e7913a69e197646b82568217918fd9ece808"} Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.739928 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.755469 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc293cb3-7b1d-4102-b9c3-65e58516ec79","Type":"ContainerStarted","Data":"fb261ac23ca3dd6eefe8f2199077bd754e1f688238d6c45c508f1ba712d2770c"} Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.755532 4678 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc293cb3-7b1d-4102-b9c3-65e58516ec79","Type":"ContainerStarted","Data":"9d500cc4984dcac9841ca833f07914412bf8a72620ae0561595dff2d72d3e713"} Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.756695 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.772387 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" event={"ID":"d1b32f71-b898-4cd7-8aea-aaa4dc76b80d","Type":"ContainerDied","Data":"13238272425ea7790476f1dccd23d891f298e0843ef4b582f0bf64997f7a0e6e"} Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.772454 4678 scope.go:117] "RemoveContainer" containerID="4653ca593da88f2c84b8341a720d677a6da037ff33f8c4fc379fe42f6a876a97" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.772633 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-dthhm" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.788328 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-2kwbz" podStartSLOduration=4.788307317 podStartE2EDuration="4.788307317s" podCreationTimestamp="2025-11-24 11:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:13.776143555 +0000 UTC m=+1184.707203204" watchObservedRunningTime="2025-11-24 11:36:13.788307317 +0000 UTC m=+1184.719366966" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.921473 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.559660792 podStartE2EDuration="4.921457382s" podCreationTimestamp="2025-11-24 11:36:09 +0000 UTC" firstStartedPulling="2025-11-24 11:36:11.435944384 +0000 UTC m=+1182.367004023" 
lastFinishedPulling="2025-11-24 11:36:12.797740974 +0000 UTC m=+1183.728800613" observedRunningTime="2025-11-24 11:36:13.831144141 +0000 UTC m=+1184.762203780" watchObservedRunningTime="2025-11-24 11:36:13.921457382 +0000 UTC m=+1184.852517021" Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.935466 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-dthhm"] Nov 24 11:36:13 crc kubenswrapper[4678]: I1124 11:36:13.944121 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-dthhm"] Nov 24 11:36:14 crc kubenswrapper[4678]: I1124 11:36:14.542044 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 11:36:14 crc kubenswrapper[4678]: I1124 11:36:14.542335 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 11:36:14 crc kubenswrapper[4678]: I1124 11:36:14.643795 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 11:36:14 crc kubenswrapper[4678]: I1124 11:36:14.888460 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.203497 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b6d66f75b-9j4v9" podUID="d91b5ecf-edd7-4914-b8d0-4dbae32548f6" containerName="console" containerID="cri-o://f3accefc14b1fca3e456d3e93b22c172eacc395613fb3dc30dc00b8b3764a51f" gracePeriod=15 Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.573604 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0" Nov 24 11:36:15 crc 
kubenswrapper[4678]: E1124 11:36:15.574125 4678 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 11:36:15 crc kubenswrapper[4678]: E1124 11:36:15.574142 4678 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 11:36:15 crc kubenswrapper[4678]: E1124 11:36:15.574187 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift podName:1a7a4a62-9baa-4df8-ba83-688dc6817249 nodeName:}" failed. No retries permitted until 2025-11-24 11:36:31.574172204 +0000 UTC m=+1202.505231833 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift") pod "swift-storage-0" (UID: "1a7a4a62-9baa-4df8-ba83-688dc6817249") : configmap "swift-ring-files" not found Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.796982 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b6d66f75b-9j4v9_d91b5ecf-edd7-4914-b8d0-4dbae32548f6/console/0.log" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.797044 4678 generic.go:334] "Generic (PLEG): container finished" podID="d91b5ecf-edd7-4914-b8d0-4dbae32548f6" containerID="f3accefc14b1fca3e456d3e93b22c172eacc395613fb3dc30dc00b8b3764a51f" exitCode=2 Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.797196 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b6d66f75b-9j4v9" event={"ID":"d91b5ecf-edd7-4914-b8d0-4dbae32548f6","Type":"ContainerDied","Data":"f3accefc14b1fca3e456d3e93b22c172eacc395613fb3dc30dc00b8b3764a51f"} Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.921280 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" 
path="/var/lib/kubelet/pods/d1b32f71-b898-4cd7-8aea-aaa4dc76b80d/volumes" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.922338 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-g8hdr"] Nov 24 11:36:15 crc kubenswrapper[4678]: E1124 11:36:15.922825 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc92e56-646f-4646-817a-cea16263dc09" containerName="dnsmasq-dns" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.922856 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc92e56-646f-4646-817a-cea16263dc09" containerName="dnsmasq-dns" Nov 24 11:36:15 crc kubenswrapper[4678]: E1124 11:36:15.922922 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" containerName="init" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.922932 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" containerName="init" Nov 24 11:36:15 crc kubenswrapper[4678]: E1124 11:36:15.922943 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc92e56-646f-4646-817a-cea16263dc09" containerName="init" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.922949 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc92e56-646f-4646-817a-cea16263dc09" containerName="init" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.923206 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc92e56-646f-4646-817a-cea16263dc09" containerName="dnsmasq-dns" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.923234 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b32f71-b898-4cd7-8aea-aaa4dc76b80d" containerName="init" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.924286 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g8hdr"] Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.924409 4678 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g8hdr" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.933283 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8dce-account-create-k6d7z"] Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.934959 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.937981 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 11:36:15 crc kubenswrapper[4678]: I1124 11:36:15.945010 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8dce-account-create-k6d7z"] Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.056565 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5vx7g"] Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.058501 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.070046 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5vx7g"] Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.085183 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgtrx\" (UniqueName: \"kubernetes.io/projected/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-kube-api-access-sgtrx\") pod \"keystone-db-create-g8hdr\" (UID: \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\") " pod="openstack/keystone-db-create-g8hdr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.085233 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shs2q\" (UniqueName: \"kubernetes.io/projected/eedffe7d-12cf-4276-b084-e121838c576d-kube-api-access-shs2q\") pod \"keystone-8dce-account-create-k6d7z\" (UID: \"eedffe7d-12cf-4276-b084-e121838c576d\") " pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.085286 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eedffe7d-12cf-4276-b084-e121838c576d-operator-scripts\") pod \"keystone-8dce-account-create-k6d7z\" (UID: \"eedffe7d-12cf-4276-b084-e121838c576d\") " pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.085341 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-operator-scripts\") pod \"keystone-db-create-g8hdr\" (UID: \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\") " pod="openstack/keystone-db-create-g8hdr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.165769 4678 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/placement-3978-account-create-gsvfr"] Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.167556 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.173831 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.185496 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3978-account-create-gsvfr"] Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.197268 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eedffe7d-12cf-4276-b084-e121838c576d-operator-scripts\") pod \"keystone-8dce-account-create-k6d7z\" (UID: \"eedffe7d-12cf-4276-b084-e121838c576d\") " pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.197496 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-operator-scripts\") pod \"keystone-db-create-g8hdr\" (UID: \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\") " pod="openstack/keystone-db-create-g8hdr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.197795 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-operator-scripts\") pod \"placement-db-create-5vx7g\" (UID: \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\") " pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.197887 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnxdc\" (UniqueName: 
\"kubernetes.io/projected/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-kube-api-access-gnxdc\") pod \"placement-db-create-5vx7g\" (UID: \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\") " pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.197939 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgtrx\" (UniqueName: \"kubernetes.io/projected/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-kube-api-access-sgtrx\") pod \"keystone-db-create-g8hdr\" (UID: \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\") " pod="openstack/keystone-db-create-g8hdr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.197972 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shs2q\" (UniqueName: \"kubernetes.io/projected/eedffe7d-12cf-4276-b084-e121838c576d-kube-api-access-shs2q\") pod \"keystone-8dce-account-create-k6d7z\" (UID: \"eedffe7d-12cf-4276-b084-e121838c576d\") " pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.198064 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eedffe7d-12cf-4276-b084-e121838c576d-operator-scripts\") pod \"keystone-8dce-account-create-k6d7z\" (UID: \"eedffe7d-12cf-4276-b084-e121838c576d\") " pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.198606 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-operator-scripts\") pod \"keystone-db-create-g8hdr\" (UID: \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\") " pod="openstack/keystone-db-create-g8hdr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.224836 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shs2q\" (UniqueName: 
\"kubernetes.io/projected/eedffe7d-12cf-4276-b084-e121838c576d-kube-api-access-shs2q\") pod \"keystone-8dce-account-create-k6d7z\" (UID: \"eedffe7d-12cf-4276-b084-e121838c576d\") " pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.229022 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgtrx\" (UniqueName: \"kubernetes.io/projected/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-kube-api-access-sgtrx\") pod \"keystone-db-create-g8hdr\" (UID: \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\") " pod="openstack/keystone-db-create-g8hdr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.259222 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g8hdr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.271599 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.299944 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wkz\" (UniqueName: \"kubernetes.io/projected/700ed725-dec9-4b2c-873c-82075bbcd721-kube-api-access-z4wkz\") pod \"placement-3978-account-create-gsvfr\" (UID: \"700ed725-dec9-4b2c-873c-82075bbcd721\") " pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.300417 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700ed725-dec9-4b2c-873c-82075bbcd721-operator-scripts\") pod \"placement-3978-account-create-gsvfr\" (UID: \"700ed725-dec9-4b2c-873c-82075bbcd721\") " pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.300611 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-operator-scripts\") pod \"placement-db-create-5vx7g\" (UID: \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\") " pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.300889 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxdc\" (UniqueName: \"kubernetes.io/projected/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-kube-api-access-gnxdc\") pod \"placement-db-create-5vx7g\" (UID: \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\") " pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.301749 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-operator-scripts\") pod \"placement-db-create-5vx7g\" (UID: \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\") " pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.319189 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxdc\" (UniqueName: \"kubernetes.io/projected/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-kube-api-access-gnxdc\") pod \"placement-db-create-5vx7g\" (UID: \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\") " pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.386714 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.402863 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wkz\" (UniqueName: \"kubernetes.io/projected/700ed725-dec9-4b2c-873c-82075bbcd721-kube-api-access-z4wkz\") pod \"placement-3978-account-create-gsvfr\" (UID: \"700ed725-dec9-4b2c-873c-82075bbcd721\") " pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.403011 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700ed725-dec9-4b2c-873c-82075bbcd721-operator-scripts\") pod \"placement-3978-account-create-gsvfr\" (UID: \"700ed725-dec9-4b2c-873c-82075bbcd721\") " pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.404042 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700ed725-dec9-4b2c-873c-82075bbcd721-operator-scripts\") pod \"placement-3978-account-create-gsvfr\" (UID: \"700ed725-dec9-4b2c-873c-82075bbcd721\") " pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.420832 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wkz\" (UniqueName: \"kubernetes.io/projected/700ed725-dec9-4b2c-873c-82075bbcd721-kube-api-access-z4wkz\") pod \"placement-3978-account-create-gsvfr\" (UID: \"700ed725-dec9-4b2c-873c-82075bbcd721\") " pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.495848 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.807895 4678 generic.go:334] "Generic (PLEG): container finished" podID="728e8f13-52c5-4b48-9fff-8053732311b9" containerID="3a7b5ef4c4fa5ee85ae38f98dba7ea094ecd28d33191e8a701dfe02bc4368e70" exitCode=0 Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.807973 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"728e8f13-52c5-4b48-9fff-8053732311b9","Type":"ContainerDied","Data":"3a7b5ef4c4fa5ee85ae38f98dba7ea094ecd28d33191e8a701dfe02bc4368e70"} Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.814186 4678 generic.go:334] "Generic (PLEG): container finished" podID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerID="78a42a92af69cea2096a817c36fa21b3dd0f79b6d7fef3c6e4842c308a764028" exitCode=0 Nov 24 11:36:16 crc kubenswrapper[4678]: I1124 11:36:16.814233 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6","Type":"ContainerDied","Data":"78a42a92af69cea2096a817c36fa21b3dd0f79b6d7fef3c6e4842c308a764028"} Nov 24 11:36:17 crc kubenswrapper[4678]: I1124 11:36:17.893723 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bwdh4"] Nov 24 11:36:17 crc kubenswrapper[4678]: I1124 11:36:17.899280 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" Nov 24 11:36:17 crc kubenswrapper[4678]: I1124 11:36:17.916364 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bwdh4"] Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.042414 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88hq\" (UniqueName: \"kubernetes.io/projected/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-kube-api-access-k88hq\") pod \"mysqld-exporter-openstack-db-create-bwdh4\" (UID: \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\") " pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.043208 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bwdh4\" (UID: \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\") " pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.145955 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bwdh4\" (UID: \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\") " pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.146077 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k88hq\" (UniqueName: \"kubernetes.io/projected/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-kube-api-access-k88hq\") pod \"mysqld-exporter-openstack-db-create-bwdh4\" (UID: \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\") " pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" Nov 
24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.146887 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bwdh4\" (UID: \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\") " pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.174583 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k88hq\" (UniqueName: \"kubernetes.io/projected/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-kube-api-access-k88hq\") pod \"mysqld-exporter-openstack-db-create-bwdh4\" (UID: \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\") " pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.197067 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-c45f-account-create-pmk9q"] Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.200363 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.203592 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.208371 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c45f-account-create-pmk9q"] Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.226372 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.349919 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d8b1fc-83cc-4133-a390-e8d87ee4375b-operator-scripts\") pod \"mysqld-exporter-c45f-account-create-pmk9q\" (UID: \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\") " pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.349981 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jswnf\" (UniqueName: \"kubernetes.io/projected/93d8b1fc-83cc-4133-a390-e8d87ee4375b-kube-api-access-jswnf\") pod \"mysqld-exporter-c45f-account-create-pmk9q\" (UID: \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\") " pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.452476 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d8b1fc-83cc-4133-a390-e8d87ee4375b-operator-scripts\") pod \"mysqld-exporter-c45f-account-create-pmk9q\" (UID: \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\") " pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.453357 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jswnf\" (UniqueName: \"kubernetes.io/projected/93d8b1fc-83cc-4133-a390-e8d87ee4375b-kube-api-access-jswnf\") pod \"mysqld-exporter-c45f-account-create-pmk9q\" (UID: \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\") " pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.453295 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/93d8b1fc-83cc-4133-a390-e8d87ee4375b-operator-scripts\") pod \"mysqld-exporter-c45f-account-create-pmk9q\" (UID: \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\") " pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.470633 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jswnf\" (UniqueName: \"kubernetes.io/projected/93d8b1fc-83cc-4133-a390-e8d87ee4375b-kube-api-access-jswnf\") pod \"mysqld-exporter-c45f-account-create-pmk9q\" (UID: \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\") " pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.541159 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.859265 4678 generic.go:334] "Generic (PLEG): container finished" podID="1d9fedfc-2539-44c3-9124-7b5c96af23da" containerID="c7deb1ace3b5f56387e12c577ede205cdeb37697a2ea5f29ecc8a9266e3b47b5" exitCode=0 Nov 24 11:36:18 crc kubenswrapper[4678]: I1124 11:36:18.859623 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4wb58" event={"ID":"1d9fedfc-2539-44c3-9124-7b5c96af23da","Type":"ContainerDied","Data":"c7deb1ace3b5f56387e12c577ede205cdeb37697a2ea5f29ecc8a9266e3b47b5"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.093793 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b6d66f75b-9j4v9_d91b5ecf-edd7-4914-b8d0-4dbae32548f6/console/0.log" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.094207 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.169034 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-trusted-ca-bundle\") pod \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.169202 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-serving-cert\") pod \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.169278 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-oauth-serving-cert\") pod \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.169372 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m27c\" (UniqueName: \"kubernetes.io/projected/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-kube-api-access-8m27c\") pod \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.169409 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-config\") pod \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.169443 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-service-ca\") pod \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.169500 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-oauth-config\") pod \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.170550 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-config" (OuterVolumeSpecName: "console-config") pod "d91b5ecf-edd7-4914-b8d0-4dbae32548f6" (UID: "d91b5ecf-edd7-4914-b8d0-4dbae32548f6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.170562 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d91b5ecf-edd7-4914-b8d0-4dbae32548f6" (UID: "d91b5ecf-edd7-4914-b8d0-4dbae32548f6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.170952 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-service-ca" (OuterVolumeSpecName: "service-ca") pod "d91b5ecf-edd7-4914-b8d0-4dbae32548f6" (UID: "d91b5ecf-edd7-4914-b8d0-4dbae32548f6"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.170970 4678 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.171006 4678 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.171529 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d91b5ecf-edd7-4914-b8d0-4dbae32548f6" (UID: "d91b5ecf-edd7-4914-b8d0-4dbae32548f6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.209227 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-kube-api-access-8m27c" (OuterVolumeSpecName: "kube-api-access-8m27c") pod "d91b5ecf-edd7-4914-b8d0-4dbae32548f6" (UID: "d91b5ecf-edd7-4914-b8d0-4dbae32548f6"). InnerVolumeSpecName "kube-api-access-8m27c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.223004 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d91b5ecf-edd7-4914-b8d0-4dbae32548f6" (UID: "d91b5ecf-edd7-4914-b8d0-4dbae32548f6"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.270813 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d91b5ecf-edd7-4914-b8d0-4dbae32548f6" (UID: "d91b5ecf-edd7-4914-b8d0-4dbae32548f6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.287284 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-oauth-config\") pod \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\" (UID: \"d91b5ecf-edd7-4914-b8d0-4dbae32548f6\") " Nov 24 11:36:19 crc kubenswrapper[4678]: W1124 11:36:19.287917 4678 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d91b5ecf-edd7-4914-b8d0-4dbae32548f6/volumes/kubernetes.io~secret/console-oauth-config Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.287933 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d91b5ecf-edd7-4914-b8d0-4dbae32548f6" (UID: "d91b5ecf-edd7-4914-b8d0-4dbae32548f6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.290083 4678 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.290108 4678 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.290118 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m27c\" (UniqueName: \"kubernetes.io/projected/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-kube-api-access-8m27c\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.290131 4678 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.290140 4678 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d91b5ecf-edd7-4914-b8d0-4dbae32548f6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.335934 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5vx7g"] Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.812216 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8dce-account-create-k6d7z"] Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.838154 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3978-account-create-gsvfr"] Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.860523 4678 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bwdh4"] Nov 24 11:36:19 crc kubenswrapper[4678]: W1124 11:36:19.863827 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91ab28a9_6ee0_4a76_ae5f_c4b27521125d.slice/crio-583d9646822ddde320abb506ae31a6db63c86bb9b5c2665e5bf3df2425df5dfc WatchSource:0}: Error finding container 583d9646822ddde320abb506ae31a6db63c86bb9b5c2665e5bf3df2425df5dfc: Status 404 returned error can't find the container with id 583d9646822ddde320abb506ae31a6db63c86bb9b5c2665e5bf3df2425df5dfc Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.872859 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerStarted","Data":"ecba69d197b24e5a5a5ba6a8e6b656b0cdcac99f6b91db39f8bfaa51c70ffb13"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.874100 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8dce-account-create-k6d7z" event={"ID":"eedffe7d-12cf-4276-b084-e121838c576d","Type":"ContainerStarted","Data":"6ddbaede39b88072cc24c6aee53d5f737d02223c20b506abbc20b829103d7f7d"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.875125 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3978-account-create-gsvfr" event={"ID":"700ed725-dec9-4b2c-873c-82075bbcd721","Type":"ContainerStarted","Data":"9c2ebd36c934409e117fd02a6112e18cc3b507586039ba54a996d484fa7aa589"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.877247 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"728e8f13-52c5-4b48-9fff-8053732311b9","Type":"ContainerStarted","Data":"8a8cf707155e80e1af5fc5b42d9d80b457334efc638ac5d7a6c2f840eb749a1b"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.878445 4678 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.879570 4678 generic.go:334] "Generic (PLEG): container finished" podID="ec3b0873-a45a-4311-a6e9-8f0dc4d031b8" containerID="018cfb5aa100853e1ee9f324cf4a2b16756725fe0353c7a3c29fd43cba415000" exitCode=0 Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.879609 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5vx7g" event={"ID":"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8","Type":"ContainerDied","Data":"018cfb5aa100853e1ee9f324cf4a2b16756725fe0353c7a3c29fd43cba415000"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.879629 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5vx7g" event={"ID":"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8","Type":"ContainerStarted","Data":"05f70c00cefefa63acf41e446ad07be60c93d6db117358ea65f8487f4500d825"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.879889 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g8hdr"] Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.883038 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b6d66f75b-9j4v9_d91b5ecf-edd7-4914-b8d0-4dbae32548f6/console/0.log" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.883141 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b6d66f75b-9j4v9" event={"ID":"d91b5ecf-edd7-4914-b8d0-4dbae32548f6","Type":"ContainerDied","Data":"c39a344b59f1c60ac1b1034f7967548428bd075fb6b7084bf3a73d189fa9e2e1"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.883172 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b6d66f75b-9j4v9" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.883182 4678 scope.go:117] "RemoveContainer" containerID="f3accefc14b1fca3e456d3e93b22c172eacc395613fb3dc30dc00b8b3764a51f" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.887256 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6","Type":"ContainerStarted","Data":"d305f097289a80687334143eb9411e020d57ca5b69dadc8b47b0fda3a754ccc7"} Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.887546 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.889600 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.914385 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.602690043 podStartE2EDuration="58.914367611s" podCreationTimestamp="2025-11-24 11:35:21 +0000 UTC" firstStartedPulling="2025-11-24 11:35:27.625631464 +0000 UTC m=+1138.556691103" lastFinishedPulling="2025-11-24 11:35:41.937309032 +0000 UTC m=+1152.868368671" observedRunningTime="2025-11-24 11:36:19.908847144 +0000 UTC m=+1190.839906783" watchObservedRunningTime="2025-11-24 11:36:19.914367611 +0000 UTC m=+1190.845427250" Nov 24 11:36:19 crc kubenswrapper[4678]: I1124 11:36:19.967651 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c45f-account-create-pmk9q"] Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:19.999322 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=58.999287419 podStartE2EDuration="58.999287419s" podCreationTimestamp="2025-11-24 11:35:21 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:19.959180337 +0000 UTC m=+1190.890239976" watchObservedRunningTime="2025-11-24 11:36:19.999287419 +0000 UTC m=+1190.930347068" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.082718 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b6d66f75b-9j4v9"] Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.101980 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b6d66f75b-9j4v9"] Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.113411 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-rpz9v"] Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.113744 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" podUID="79a6831e-5782-487e-ae5c-88373fb86b78" containerName="dnsmasq-dns" containerID="cri-o://0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664" gracePeriod=10 Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.443260 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.526700 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7x5f\" (UniqueName: \"kubernetes.io/projected/1d9fedfc-2539-44c3-9124-7b5c96af23da-kube-api-access-t7x5f\") pod \"1d9fedfc-2539-44c3-9124-7b5c96af23da\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.526765 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d9fedfc-2539-44c3-9124-7b5c96af23da-etc-swift\") pod \"1d9fedfc-2539-44c3-9124-7b5c96af23da\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.526888 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-ring-data-devices\") pod \"1d9fedfc-2539-44c3-9124-7b5c96af23da\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.527037 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-dispersionconf\") pod \"1d9fedfc-2539-44c3-9124-7b5c96af23da\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.527109 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-combined-ca-bundle\") pod \"1d9fedfc-2539-44c3-9124-7b5c96af23da\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.527177 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-swiftconf\") pod \"1d9fedfc-2539-44c3-9124-7b5c96af23da\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.527298 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-scripts\") pod \"1d9fedfc-2539-44c3-9124-7b5c96af23da\" (UID: \"1d9fedfc-2539-44c3-9124-7b5c96af23da\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.527770 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1d9fedfc-2539-44c3-9124-7b5c96af23da" (UID: "1d9fedfc-2539-44c3-9124-7b5c96af23da"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.527904 4678 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.528363 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d9fedfc-2539-44c3-9124-7b5c96af23da-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1d9fedfc-2539-44c3-9124-7b5c96af23da" (UID: "1d9fedfc-2539-44c3-9124-7b5c96af23da"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.589909 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9fedfc-2539-44c3-9124-7b5c96af23da-kube-api-access-t7x5f" (OuterVolumeSpecName: "kube-api-access-t7x5f") pod "1d9fedfc-2539-44c3-9124-7b5c96af23da" (UID: "1d9fedfc-2539-44c3-9124-7b5c96af23da"). InnerVolumeSpecName "kube-api-access-t7x5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.630074 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7x5f\" (UniqueName: \"kubernetes.io/projected/1d9fedfc-2539-44c3-9124-7b5c96af23da-kube-api-access-t7x5f\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.630121 4678 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d9fedfc-2539-44c3-9124-7b5c96af23da-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.802197 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1d9fedfc-2539-44c3-9124-7b5c96af23da" (UID: "1d9fedfc-2539-44c3-9124-7b5c96af23da"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.852615 4678 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.894978 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.907112 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g8hdr" event={"ID":"e68cf86b-0798-4155-ba4c-dfc5ef2698cc","Type":"ContainerStarted","Data":"b730a84a5754daf08512fc750759285bad9d1574e4166add4c4ccb7b73c0887f"} Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.907877 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" event={"ID":"93d8b1fc-83cc-4133-a390-e8d87ee4375b","Type":"ContainerStarted","Data":"7ae36264be71a554aff7da662576e3ed5f3b00f2d7d1106f788371a13039c3c8"} Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.908883 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4wb58" event={"ID":"1d9fedfc-2539-44c3-9124-7b5c96af23da","Type":"ContainerDied","Data":"885b2709ecb47e2958b94f6218613ea3596faddf1413de53e43c9e1863619f55"} Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.908907 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885b2709ecb47e2958b94f6218613ea3596faddf1413de53e43c9e1863619f55" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.908952 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4wb58" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.914343 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8dce-account-create-k6d7z" event={"ID":"eedffe7d-12cf-4276-b084-e121838c576d","Type":"ContainerStarted","Data":"d28f75cf0874e3d9ad4b9406bc6176a70c3f1de74f08e638aa0b0002497e738f"} Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.927126 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d9fedfc-2539-44c3-9124-7b5c96af23da" (UID: "1d9fedfc-2539-44c3-9124-7b5c96af23da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.932250 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3978-account-create-gsvfr" event={"ID":"700ed725-dec9-4b2c-873c-82075bbcd721","Type":"ContainerStarted","Data":"3e33b21d65efdd6fdf91f72f702d551dc57224d9d07f38463dd019d1af4aca53"} Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.963442 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs8dk\" (UniqueName: \"kubernetes.io/projected/79a6831e-5782-487e-ae5c-88373fb86b78-kube-api-access-rs8dk\") pod \"79a6831e-5782-487e-ae5c-88373fb86b78\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.963587 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-dns-svc\") pod \"79a6831e-5782-487e-ae5c-88373fb86b78\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.963694 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-config\") pod \"79a6831e-5782-487e-ae5c-88373fb86b78\" (UID: \"79a6831e-5782-487e-ae5c-88373fb86b78\") " Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.972052 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.983401 4678 generic.go:334] "Generic (PLEG): container finished" podID="79a6831e-5782-487e-ae5c-88373fb86b78" containerID="0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664" exitCode=0 Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.983523 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" event={"ID":"79a6831e-5782-487e-ae5c-88373fb86b78","Type":"ContainerDied","Data":"0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664"} Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.983557 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" event={"ID":"79a6831e-5782-487e-ae5c-88373fb86b78","Type":"ContainerDied","Data":"f74e74153e50bb24b73565faf82e08a0da19526f616bb4a970bf7ea9a6a6b967"} Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.983588 4678 scope.go:117] "RemoveContainer" containerID="0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664" Nov 24 11:36:20 crc kubenswrapper[4678]: I1124 11:36:20.983917 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-rpz9v" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.002798 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" event={"ID":"91ab28a9-6ee0-4a76-ae5f-c4b27521125d","Type":"ContainerStarted","Data":"583d9646822ddde320abb506ae31a6db63c86bb9b5c2665e5bf3df2425df5dfc"} Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.051123 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8dce-account-create-k6d7z" podStartSLOduration=6.051098093 podStartE2EDuration="6.051098093s" podCreationTimestamp="2025-11-24 11:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:20.979308593 +0000 UTC m=+1191.910368242" watchObservedRunningTime="2025-11-24 11:36:21.051098093 +0000 UTC m=+1191.982157722" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.109628 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3978-account-create-gsvfr" podStartSLOduration=5.109609142 podStartE2EDuration="5.109609142s" podCreationTimestamp="2025-11-24 11:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:21.092052867 +0000 UTC m=+1192.023112506" watchObservedRunningTime="2025-11-24 11:36:21.109609142 +0000 UTC m=+1192.040668781" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.117455 4678 scope.go:117] "RemoveContainer" containerID="734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.204870 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a6831e-5782-487e-ae5c-88373fb86b78-kube-api-access-rs8dk" (OuterVolumeSpecName: 
"kube-api-access-rs8dk") pod "79a6831e-5782-487e-ae5c-88373fb86b78" (UID: "79a6831e-5782-487e-ae5c-88373fb86b78"). InnerVolumeSpecName "kube-api-access-rs8dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.239158 4678 scope.go:117] "RemoveContainer" containerID="0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664" Nov 24 11:36:21 crc kubenswrapper[4678]: E1124 11:36:21.239453 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664\": container with ID starting with 0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664 not found: ID does not exist" containerID="0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.239477 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664"} err="failed to get container status \"0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664\": rpc error: code = NotFound desc = could not find container \"0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664\": container with ID starting with 0257598a4d034508c045dd83dc56e2cd68be8c5023f542b7992dfae6f8806664 not found: ID does not exist" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.239497 4678 scope.go:117] "RemoveContainer" containerID="734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3" Nov 24 11:36:21 crc kubenswrapper[4678]: E1124 11:36:21.239737 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3\": container with ID starting with 734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3 not found: 
ID does not exist" containerID="734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.239756 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3"} err="failed to get container status \"734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3\": rpc error: code = NotFound desc = could not find container \"734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3\": container with ID starting with 734612a717bc2b775cbc4364c3cdfbfac8f45de4331b2d96018671b228bd3ee3 not found: ID does not exist" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.288220 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs8dk\" (UniqueName: \"kubernetes.io/projected/79a6831e-5782-487e-ae5c-88373fb86b78-kube-api-access-rs8dk\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.307224 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8qp75"] Nov 24 11:36:21 crc kubenswrapper[4678]: E1124 11:36:21.307694 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91b5ecf-edd7-4914-b8d0-4dbae32548f6" containerName="console" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.307713 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91b5ecf-edd7-4914-b8d0-4dbae32548f6" containerName="console" Nov 24 11:36:21 crc kubenswrapper[4678]: E1124 11:36:21.307730 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9fedfc-2539-44c3-9124-7b5c96af23da" containerName="swift-ring-rebalance" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.307738 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9fedfc-2539-44c3-9124-7b5c96af23da" containerName="swift-ring-rebalance" Nov 24 11:36:21 crc kubenswrapper[4678]: E1124 11:36:21.307768 4678 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a6831e-5782-487e-ae5c-88373fb86b78" containerName="init" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.307773 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a6831e-5782-487e-ae5c-88373fb86b78" containerName="init" Nov 24 11:36:21 crc kubenswrapper[4678]: E1124 11:36:21.308271 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a6831e-5782-487e-ae5c-88373fb86b78" containerName="dnsmasq-dns" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.308285 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a6831e-5782-487e-ae5c-88373fb86b78" containerName="dnsmasq-dns" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.308512 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9fedfc-2539-44c3-9124-7b5c96af23da" containerName="swift-ring-rebalance" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.308533 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="d91b5ecf-edd7-4914-b8d0-4dbae32548f6" containerName="console" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.308546 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a6831e-5782-487e-ae5c-88373fb86b78" containerName="dnsmasq-dns" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.309275 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8qp75" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.310169 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1d9fedfc-2539-44c3-9124-7b5c96af23da" (UID: "1d9fedfc-2539-44c3-9124-7b5c96af23da"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.319207 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8qp75"] Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.382251 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-scripts" (OuterVolumeSpecName: "scripts") pod "1d9fedfc-2539-44c3-9124-7b5c96af23da" (UID: "1d9fedfc-2539-44c3-9124-7b5c96af23da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.397013 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb84b0f1-427a-4440-bfcc-cc3d7e933496-operator-scripts\") pod \"glance-db-create-8qp75\" (UID: \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\") " pod="openstack/glance-db-create-8qp75" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.397092 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdj6\" (UniqueName: \"kubernetes.io/projected/bb84b0f1-427a-4440-bfcc-cc3d7e933496-kube-api-access-mkdj6\") pod \"glance-db-create-8qp75\" (UID: \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\") " pod="openstack/glance-db-create-8qp75" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.397214 4678 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d9fedfc-2539-44c3-9124-7b5c96af23da-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.397227 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d9fedfc-2539-44c3-9124-7b5c96af23da-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 
11:36:21.418363 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1c56-account-create-jk4zk"] Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.421479 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c56-account-create-jk4zk" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.426605 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.435721 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1c56-account-create-jk4zk"] Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.470823 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79a6831e-5782-487e-ae5c-88373fb86b78" (UID: "79a6831e-5782-487e-ae5c-88373fb86b78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.498777 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb84b0f1-427a-4440-bfcc-cc3d7e933496-operator-scripts\") pod \"glance-db-create-8qp75\" (UID: \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\") " pod="openstack/glance-db-create-8qp75" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.498885 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdj6\" (UniqueName: \"kubernetes.io/projected/bb84b0f1-427a-4440-bfcc-cc3d7e933496-kube-api-access-mkdj6\") pod \"glance-db-create-8qp75\" (UID: \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\") " pod="openstack/glance-db-create-8qp75" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.499100 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.500274 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb84b0f1-427a-4440-bfcc-cc3d7e933496-operator-scripts\") pod \"glance-db-create-8qp75\" (UID: \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\") " pod="openstack/glance-db-create-8qp75" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.522202 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdj6\" (UniqueName: \"kubernetes.io/projected/bb84b0f1-427a-4440-bfcc-cc3d7e933496-kube-api-access-mkdj6\") pod \"glance-db-create-8qp75\" (UID: \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\") " pod="openstack/glance-db-create-8qp75" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.600499 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j798r\" (UniqueName: \"kubernetes.io/projected/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-kube-api-access-j798r\") pod \"glance-1c56-account-create-jk4zk\" (UID: \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\") " pod="openstack/glance-1c56-account-create-jk4zk" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.601182 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-operator-scripts\") pod \"glance-1c56-account-create-jk4zk\" (UID: \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\") " pod="openstack/glance-1c56-account-create-jk4zk" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.605378 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-config" (OuterVolumeSpecName: "config") pod "79a6831e-5782-487e-ae5c-88373fb86b78" 
(UID: "79a6831e-5782-487e-ae5c-88373fb86b78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.702960 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j798r\" (UniqueName: \"kubernetes.io/projected/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-kube-api-access-j798r\") pod \"glance-1c56-account-create-jk4zk\" (UID: \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\") " pod="openstack/glance-1c56-account-create-jk4zk" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.703105 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-operator-scripts\") pod \"glance-1c56-account-create-jk4zk\" (UID: \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\") " pod="openstack/glance-1c56-account-create-jk4zk" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.703205 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a6831e-5782-487e-ae5c-88373fb86b78-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.704120 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-operator-scripts\") pod \"glance-1c56-account-create-jk4zk\" (UID: \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\") " pod="openstack/glance-1c56-account-create-jk4zk" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.777545 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j798r\" (UniqueName: \"kubernetes.io/projected/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-kube-api-access-j798r\") pod \"glance-1c56-account-create-jk4zk\" (UID: \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\") " pod="openstack/glance-1c56-account-create-jk4zk" Nov 24 
11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.836901 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8qp75" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.849049 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c56-account-create-jk4zk" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.852520 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.929168 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d91b5ecf-edd7-4914-b8d0-4dbae32548f6" path="/var/lib/kubelet/pods/d91b5ecf-edd7-4914-b8d0-4dbae32548f6/volumes" Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.945727 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-rpz9v"] Nov 24 11:36:21 crc kubenswrapper[4678]: I1124 11:36:21.968331 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-rpz9v"] Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.008833 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-operator-scripts\") pod \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\" (UID: \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\") " Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.009141 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnxdc\" (UniqueName: \"kubernetes.io/projected/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-kube-api-access-gnxdc\") pod \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\" (UID: \"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8\") " Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.010371 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec3b0873-a45a-4311-a6e9-8f0dc4d031b8" (UID: "ec3b0873-a45a-4311-a6e9-8f0dc4d031b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.021438 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-kube-api-access-gnxdc" (OuterVolumeSpecName: "kube-api-access-gnxdc") pod "ec3b0873-a45a-4311-a6e9-8f0dc4d031b8" (UID: "ec3b0873-a45a-4311-a6e9-8f0dc4d031b8"). InnerVolumeSpecName "kube-api-access-gnxdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.032205 4678 generic.go:334] "Generic (PLEG): container finished" podID="eedffe7d-12cf-4276-b084-e121838c576d" containerID="d28f75cf0874e3d9ad4b9406bc6176a70c3f1de74f08e638aa0b0002497e738f" exitCode=0 Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.032276 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8dce-account-create-k6d7z" event={"ID":"eedffe7d-12cf-4276-b084-e121838c576d","Type":"ContainerDied","Data":"d28f75cf0874e3d9ad4b9406bc6176a70c3f1de74f08e638aa0b0002497e738f"} Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.045867 4678 generic.go:334] "Generic (PLEG): container finished" podID="700ed725-dec9-4b2c-873c-82075bbcd721" containerID="3e33b21d65efdd6fdf91f72f702d551dc57224d9d07f38463dd019d1af4aca53" exitCode=0 Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.046136 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3978-account-create-gsvfr" event={"ID":"700ed725-dec9-4b2c-873c-82075bbcd721","Type":"ContainerDied","Data":"3e33b21d65efdd6fdf91f72f702d551dc57224d9d07f38463dd019d1af4aca53"} Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.054701 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5vx7g" event={"ID":"ec3b0873-a45a-4311-a6e9-8f0dc4d031b8","Type":"ContainerDied","Data":"05f70c00cefefa63acf41e446ad07be60c93d6db117358ea65f8487f4500d825"} Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.054736 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f70c00cefefa63acf41e446ad07be60c93d6db117358ea65f8487f4500d825" Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.054791 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5vx7g" Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.111564 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnxdc\" (UniqueName: \"kubernetes.io/projected/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-kube-api-access-gnxdc\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.111626 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.412717 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1c56-account-create-jk4zk"] Nov 24 11:36:22 crc kubenswrapper[4678]: I1124 11:36:22.439172 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8qp75"] Nov 24 11:36:22 crc kubenswrapper[4678]: W1124 11:36:22.684295 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod829c1e90_ba5e_4c4f_9b18_0bd8144c1e92.slice/crio-f191e07a290164024ea3e3442ac4a1bfb61bc91b084c56663b792a176ee8872f WatchSource:0}: Error finding container f191e07a290164024ea3e3442ac4a1bfb61bc91b084c56663b792a176ee8872f: Status 404 returned error can't find the container with 
id f191e07a290164024ea3e3442ac4a1bfb61bc91b084c56663b792a176ee8872f Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.065469 4678 generic.go:334] "Generic (PLEG): container finished" podID="91ab28a9-6ee0-4a76-ae5f-c4b27521125d" containerID="68ce9dba487c9232f62a604f038bf8bb17c7d6e223a7161c08731530a8f86eab" exitCode=0 Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.065853 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" event={"ID":"91ab28a9-6ee0-4a76-ae5f-c4b27521125d","Type":"ContainerDied","Data":"68ce9dba487c9232f62a604f038bf8bb17c7d6e223a7161c08731530a8f86eab"} Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.077404 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8qp75" event={"ID":"bb84b0f1-427a-4440-bfcc-cc3d7e933496","Type":"ContainerStarted","Data":"c51c0a176d1eb606671a8575b2ee81fae466e997c3468208d9eb7a197774d59c"} Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.077466 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8qp75" event={"ID":"bb84b0f1-427a-4440-bfcc-cc3d7e933496","Type":"ContainerStarted","Data":"6373bed64b6e198c7fbce777418d2062572c86965805408f59d7868514fa08a4"} Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.093147 4678 generic.go:334] "Generic (PLEG): container finished" podID="e68cf86b-0798-4155-ba4c-dfc5ef2698cc" containerID="dd2edd04b534fd5e1e7bf5339ebb4ba8c9ead3c3d07fe21966654934f83c6bb7" exitCode=0 Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.093260 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g8hdr" event={"ID":"e68cf86b-0798-4155-ba4c-dfc5ef2698cc","Type":"ContainerDied","Data":"dd2edd04b534fd5e1e7bf5339ebb4ba8c9ead3c3d07fe21966654934f83c6bb7"} Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.095180 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c56-account-create-jk4zk" 
event={"ID":"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92","Type":"ContainerStarted","Data":"2c3ad5e32603b8f2c80538ad98ce689ae2bb486d85cd5324eb31a6db24139c4e"} Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.095207 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c56-account-create-jk4zk" event={"ID":"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92","Type":"ContainerStarted","Data":"f191e07a290164024ea3e3442ac4a1bfb61bc91b084c56663b792a176ee8872f"} Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.110812 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-8qp75" podStartSLOduration=2.110788469 podStartE2EDuration="2.110788469s" podCreationTimestamp="2025-11-24 11:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:23.102394527 +0000 UTC m=+1194.033454186" watchObservedRunningTime="2025-11-24 11:36:23.110788469 +0000 UTC m=+1194.041848108" Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.111356 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" event={"ID":"93d8b1fc-83cc-4133-a390-e8d87ee4375b","Type":"ContainerStarted","Data":"88daf149dc9bea59ebd135dc4a493f8b227343e982fdd9f984839ab671e5ada1"} Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.184423 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1c56-account-create-jk4zk" podStartSLOduration=2.184404228 podStartE2EDuration="2.184404228s" podCreationTimestamp="2025-11-24 11:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:23.165444866 +0000 UTC m=+1194.096504505" watchObservedRunningTime="2025-11-24 11:36:23.184404228 +0000 UTC m=+1194.115463867" Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.734548 
4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8dce-account-create-k6d7z" Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.863281 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eedffe7d-12cf-4276-b084-e121838c576d-operator-scripts\") pod \"eedffe7d-12cf-4276-b084-e121838c576d\" (UID: \"eedffe7d-12cf-4276-b084-e121838c576d\") " Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.863511 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shs2q\" (UniqueName: \"kubernetes.io/projected/eedffe7d-12cf-4276-b084-e121838c576d-kube-api-access-shs2q\") pod \"eedffe7d-12cf-4276-b084-e121838c576d\" (UID: \"eedffe7d-12cf-4276-b084-e121838c576d\") " Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.863759 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eedffe7d-12cf-4276-b084-e121838c576d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eedffe7d-12cf-4276-b084-e121838c576d" (UID: "eedffe7d-12cf-4276-b084-e121838c576d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.864362 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eedffe7d-12cf-4276-b084-e121838c576d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.872642 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedffe7d-12cf-4276-b084-e121838c576d-kube-api-access-shs2q" (OuterVolumeSpecName: "kube-api-access-shs2q") pod "eedffe7d-12cf-4276-b084-e121838c576d" (UID: "eedffe7d-12cf-4276-b084-e121838c576d"). InnerVolumeSpecName "kube-api-access-shs2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.910614 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a6831e-5782-487e-ae5c-88373fb86b78" path="/var/lib/kubelet/pods/79a6831e-5782-487e-ae5c-88373fb86b78/volumes" Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.953635 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3978-account-create-gsvfr" Nov 24 11:36:23 crc kubenswrapper[4678]: I1124 11:36:23.966480 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shs2q\" (UniqueName: \"kubernetes.io/projected/eedffe7d-12cf-4276-b084-e121838c576d-kube-api-access-shs2q\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.068177 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700ed725-dec9-4b2c-873c-82075bbcd721-operator-scripts\") pod \"700ed725-dec9-4b2c-873c-82075bbcd721\" (UID: \"700ed725-dec9-4b2c-873c-82075bbcd721\") " Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.068492 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wkz\" (UniqueName: \"kubernetes.io/projected/700ed725-dec9-4b2c-873c-82075bbcd721-kube-api-access-z4wkz\") pod \"700ed725-dec9-4b2c-873c-82075bbcd721\" (UID: \"700ed725-dec9-4b2c-873c-82075bbcd721\") " Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.071080 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/700ed725-dec9-4b2c-873c-82075bbcd721-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "700ed725-dec9-4b2c-873c-82075bbcd721" (UID: "700ed725-dec9-4b2c-873c-82075bbcd721"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.093966 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700ed725-dec9-4b2c-873c-82075bbcd721-kube-api-access-z4wkz" (OuterVolumeSpecName: "kube-api-access-z4wkz") pod "700ed725-dec9-4b2c-873c-82075bbcd721" (UID: "700ed725-dec9-4b2c-873c-82075bbcd721"). InnerVolumeSpecName "kube-api-access-z4wkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.123900 4678 generic.go:334] "Generic (PLEG): container finished" podID="829c1e90-ba5e-4c4f-9b18-0bd8144c1e92" containerID="2c3ad5e32603b8f2c80538ad98ce689ae2bb486d85cd5324eb31a6db24139c4e" exitCode=0 Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.124005 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c56-account-create-jk4zk" event={"ID":"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92","Type":"ContainerDied","Data":"2c3ad5e32603b8f2c80538ad98ce689ae2bb486d85cd5324eb31a6db24139c4e"} Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.126394 4678 generic.go:334] "Generic (PLEG): container finished" podID="93d8b1fc-83cc-4133-a390-e8d87ee4375b" containerID="88daf149dc9bea59ebd135dc4a493f8b227343e982fdd9f984839ab671e5ada1" exitCode=0 Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.126481 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" event={"ID":"93d8b1fc-83cc-4133-a390-e8d87ee4375b","Type":"ContainerDied","Data":"88daf149dc9bea59ebd135dc4a493f8b227343e982fdd9f984839ab671e5ada1"} Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.130464 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerStarted","Data":"7db6ea9c92fe32e4e7c04e09ce85b1655c4346c3668938e461a58b83357e5232"} Nov 24 
11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.132735 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8dce-account-create-k6d7z"
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.133975 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8dce-account-create-k6d7z" event={"ID":"eedffe7d-12cf-4276-b084-e121838c576d","Type":"ContainerDied","Data":"6ddbaede39b88072cc24c6aee53d5f737d02223c20b506abbc20b829103d7f7d"}
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.134019 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ddbaede39b88072cc24c6aee53d5f737d02223c20b506abbc20b829103d7f7d"
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.137423 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3978-account-create-gsvfr" event={"ID":"700ed725-dec9-4b2c-873c-82075bbcd721","Type":"ContainerDied","Data":"9c2ebd36c934409e117fd02a6112e18cc3b507586039ba54a996d484fa7aa589"}
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.137455 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2ebd36c934409e117fd02a6112e18cc3b507586039ba54a996d484fa7aa589"
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.137520 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3978-account-create-gsvfr"
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.148923 4678 generic.go:334] "Generic (PLEG): container finished" podID="bb84b0f1-427a-4440-bfcc-cc3d7e933496" containerID="c51c0a176d1eb606671a8575b2ee81fae466e997c3468208d9eb7a197774d59c" exitCode=0
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.148990 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8qp75" event={"ID":"bb84b0f1-427a-4440-bfcc-cc3d7e933496","Type":"ContainerDied","Data":"c51c0a176d1eb606671a8575b2ee81fae466e997c3468208d9eb7a197774d59c"}
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.172244 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700ed725-dec9-4b2c-873c-82075bbcd721-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.172284 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4wkz\" (UniqueName: \"kubernetes.io/projected/700ed725-dec9-4b2c-873c-82075bbcd721-kube-api-access-z4wkz\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:24 crc kubenswrapper[4678]: I1124 11:36:24.530155 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c45f-account-create-pmk9q"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.681189 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jswnf\" (UniqueName: \"kubernetes.io/projected/93d8b1fc-83cc-4133-a390-e8d87ee4375b-kube-api-access-jswnf\") pod \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\" (UID: \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\") "
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.681243 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d8b1fc-83cc-4133-a390-e8d87ee4375b-operator-scripts\") pod \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\" (UID: \"93d8b1fc-83cc-4133-a390-e8d87ee4375b\") "
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.683426 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d8b1fc-83cc-4133-a390-e8d87ee4375b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93d8b1fc-83cc-4133-a390-e8d87ee4375b" (UID: "93d8b1fc-83cc-4133-a390-e8d87ee4375b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.687376 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d8b1fc-83cc-4133-a390-e8d87ee4375b-kube-api-access-jswnf" (OuterVolumeSpecName: "kube-api-access-jswnf") pod "93d8b1fc-83cc-4133-a390-e8d87ee4375b" (UID: "93d8b1fc-83cc-4133-a390-e8d87ee4375b"). InnerVolumeSpecName "kube-api-access-jswnf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.783766 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jswnf\" (UniqueName: \"kubernetes.io/projected/93d8b1fc-83cc-4133-a390-e8d87ee4375b-kube-api-access-jswnf\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.783788 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d8b1fc-83cc-4133-a390-e8d87ee4375b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.801093 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g8hdr"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.816190 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bwdh4"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.987251 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k88hq\" (UniqueName: \"kubernetes.io/projected/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-kube-api-access-k88hq\") pod \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\" (UID: \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\") "
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.988309 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgtrx\" (UniqueName: \"kubernetes.io/projected/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-kube-api-access-sgtrx\") pod \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\" (UID: \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\") "
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.988385 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-operator-scripts\") pod \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\" (UID: \"e68cf86b-0798-4155-ba4c-dfc5ef2698cc\") "
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.988578 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-operator-scripts\") pod \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\" (UID: \"91ab28a9-6ee0-4a76-ae5f-c4b27521125d\") "
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.988995 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e68cf86b-0798-4155-ba4c-dfc5ef2698cc" (UID: "e68cf86b-0798-4155-ba4c-dfc5ef2698cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.989114 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91ab28a9-6ee0-4a76-ae5f-c4b27521125d" (UID: "91ab28a9-6ee0-4a76-ae5f-c4b27521125d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.990853 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.990868 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.993258 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-kube-api-access-sgtrx" (OuterVolumeSpecName: "kube-api-access-sgtrx") pod "e68cf86b-0798-4155-ba4c-dfc5ef2698cc" (UID: "e68cf86b-0798-4155-ba4c-dfc5ef2698cc"). InnerVolumeSpecName "kube-api-access-sgtrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:24.993418 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-kube-api-access-k88hq" (OuterVolumeSpecName: "kube-api-access-k88hq") pod "91ab28a9-6ee0-4a76-ae5f-c4b27521125d" (UID: "91ab28a9-6ee0-4a76-ae5f-c4b27521125d"). InnerVolumeSpecName "kube-api-access-k88hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.093961 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k88hq\" (UniqueName: \"kubernetes.io/projected/91ab28a9-6ee0-4a76-ae5f-c4b27521125d-kube-api-access-k88hq\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.094056 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgtrx\" (UniqueName: \"kubernetes.io/projected/e68cf86b-0798-4155-ba4c-dfc5ef2698cc-kube-api-access-sgtrx\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.163103 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bwdh4"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.163597 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bwdh4" event={"ID":"91ab28a9-6ee0-4a76-ae5f-c4b27521125d","Type":"ContainerDied","Data":"583d9646822ddde320abb506ae31a6db63c86bb9b5c2665e5bf3df2425df5dfc"}
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.163648 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="583d9646822ddde320abb506ae31a6db63c86bb9b5c2665e5bf3df2425df5dfc"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.166999 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g8hdr" event={"ID":"e68cf86b-0798-4155-ba4c-dfc5ef2698cc","Type":"ContainerDied","Data":"b730a84a5754daf08512fc750759285bad9d1574e4166add4c4ccb7b73c0887f"}
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.167028 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b730a84a5754daf08512fc750759285bad9d1574e4166add4c4ccb7b73c0887f"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.167039 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g8hdr"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.168802 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c45f-account-create-pmk9q" event={"ID":"93d8b1fc-83cc-4133-a390-e8d87ee4375b","Type":"ContainerDied","Data":"7ae36264be71a554aff7da662576e3ed5f3b00f2d7d1106f788371a13039c3c8"}
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.168833 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c45f-account-create-pmk9q"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.168857 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae36264be71a554aff7da662576e3ed5f3b00f2d7d1106f788371a13039c3c8"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.400237 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.880230 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c56-account-create-jk4zk"
Nov 24 11:36:25 crc kubenswrapper[4678]: I1124 11:36:25.881025 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8qp75"
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.016046 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-operator-scripts\") pod \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\" (UID: \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\") "
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.016195 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb84b0f1-427a-4440-bfcc-cc3d7e933496-operator-scripts\") pod \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\" (UID: \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\") "
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.016289 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkdj6\" (UniqueName: \"kubernetes.io/projected/bb84b0f1-427a-4440-bfcc-cc3d7e933496-kube-api-access-mkdj6\") pod \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\" (UID: \"bb84b0f1-427a-4440-bfcc-cc3d7e933496\") "
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.016365 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j798r\" (UniqueName: \"kubernetes.io/projected/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-kube-api-access-j798r\") pod \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\" (UID: \"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92\") "
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.018995 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb84b0f1-427a-4440-bfcc-cc3d7e933496-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb84b0f1-427a-4440-bfcc-cc3d7e933496" (UID: "bb84b0f1-427a-4440-bfcc-cc3d7e933496"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.021867 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "829c1e90-ba5e-4c4f-9b18-0bd8144c1e92" (UID: "829c1e90-ba5e-4c4f-9b18-0bd8144c1e92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.026859 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-kube-api-access-j798r" (OuterVolumeSpecName: "kube-api-access-j798r") pod "829c1e90-ba5e-4c4f-9b18-0bd8144c1e92" (UID: "829c1e90-ba5e-4c4f-9b18-0bd8144c1e92"). InnerVolumeSpecName "kube-api-access-j798r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.026967 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb84b0f1-427a-4440-bfcc-cc3d7e933496-kube-api-access-mkdj6" (OuterVolumeSpecName: "kube-api-access-mkdj6") pod "bb84b0f1-427a-4440-bfcc-cc3d7e933496" (UID: "bb84b0f1-427a-4440-bfcc-cc3d7e933496"). InnerVolumeSpecName "kube-api-access-mkdj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.118382 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j798r\" (UniqueName: \"kubernetes.io/projected/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-kube-api-access-j798r\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.118789 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.118799 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb84b0f1-427a-4440-bfcc-cc3d7e933496-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.118807 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkdj6\" (UniqueName: \"kubernetes.io/projected/bb84b0f1-427a-4440-bfcc-cc3d7e933496-kube-api-access-mkdj6\") on node \"crc\" DevicePath \"\""
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.178806 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8qp75" event={"ID":"bb84b0f1-427a-4440-bfcc-cc3d7e933496","Type":"ContainerDied","Data":"6373bed64b6e198c7fbce777418d2062572c86965805408f59d7868514fa08a4"}
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.178845 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6373bed64b6e198c7fbce777418d2062572c86965805408f59d7868514fa08a4"
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.178897 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8qp75"
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.184437 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c56-account-create-jk4zk" event={"ID":"829c1e90-ba5e-4c4f-9b18-0bd8144c1e92","Type":"ContainerDied","Data":"f191e07a290164024ea3e3442ac4a1bfb61bc91b084c56663b792a176ee8872f"}
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.184501 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f191e07a290164024ea3e3442ac4a1bfb61bc91b084c56663b792a176ee8872f"
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.184573 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c56-account-create-jk4zk"
Nov 24 11:36:26 crc kubenswrapper[4678]: I1124 11:36:26.904108 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-blf4t" podUID="de344c51-a739-44dc-b0a2-914839d40a8b" containerName="ovn-controller" probeResult="failure" output=<
Nov 24 11:36:26 crc kubenswrapper[4678]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Nov 24 11:36:26 crc kubenswrapper[4678]: >
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.204551 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerStarted","Data":"fbc8732dfc88f205b0e9a06e1051ab0a3fcd2f8dbc3c9641723f582d65bc2537"}
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.412310 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.546576517 podStartE2EDuration="1m0.412284273s" podCreationTimestamp="2025-11-24 11:35:28 +0000 UTC" firstStartedPulling="2025-11-24 11:35:43.526850403 +0000 UTC m=+1154.457910042" lastFinishedPulling="2025-11-24 11:36:27.392558159 +0000 UTC m=+1198.323617798" observedRunningTime="2025-11-24 11:36:28.249430223 +0000 UTC m=+1199.180489872" watchObservedRunningTime="2025-11-24 11:36:28.412284273 +0000 UTC m=+1199.343343912"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.421779 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"]
Nov 24 11:36:28 crc kubenswrapper[4678]: E1124 11:36:28.422445 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3b0873-a45a-4311-a6e9-8f0dc4d031b8" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422469 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3b0873-a45a-4311-a6e9-8f0dc4d031b8" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: E1124 11:36:28.422481 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700ed725-dec9-4b2c-873c-82075bbcd721" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422489 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="700ed725-dec9-4b2c-873c-82075bbcd721" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: E1124 11:36:28.422507 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829c1e90-ba5e-4c4f-9b18-0bd8144c1e92" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422514 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="829c1e90-ba5e-4c4f-9b18-0bd8144c1e92" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: E1124 11:36:28.422527 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68cf86b-0798-4155-ba4c-dfc5ef2698cc" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422534 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68cf86b-0798-4155-ba4c-dfc5ef2698cc" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: E1124 11:36:28.422549 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d8b1fc-83cc-4133-a390-e8d87ee4375b" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422557 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d8b1fc-83cc-4133-a390-e8d87ee4375b" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: E1124 11:36:28.422581 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedffe7d-12cf-4276-b084-e121838c576d" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422588 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedffe7d-12cf-4276-b084-e121838c576d" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: E1124 11:36:28.422615 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ab28a9-6ee0-4a76-ae5f-c4b27521125d" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422622 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ab28a9-6ee0-4a76-ae5f-c4b27521125d" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: E1124 11:36:28.422639 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb84b0f1-427a-4440-bfcc-cc3d7e933496" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422649 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb84b0f1-427a-4440-bfcc-cc3d7e933496" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422898 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68cf86b-0798-4155-ba4c-dfc5ef2698cc" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422926 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d8b1fc-83cc-4133-a390-e8d87ee4375b" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422938 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ab28a9-6ee0-4a76-ae5f-c4b27521125d" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422952 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3b0873-a45a-4311-a6e9-8f0dc4d031b8" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422964 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb84b0f1-427a-4440-bfcc-cc3d7e933496" containerName="mariadb-database-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422978 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="700ed725-dec9-4b2c-873c-82075bbcd721" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.422990 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="eedffe7d-12cf-4276-b084-e121838c576d" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.423004 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="829c1e90-ba5e-4c4f-9b18-0bd8144c1e92" containerName="mariadb-account-create"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.423949 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.432559 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"]
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.589487 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmrp\" (UniqueName: \"kubernetes.io/projected/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-kube-api-access-fbmrp\") pod \"mysqld-exporter-openstack-cell1-db-create-fzs56\" (UID: \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.589644 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fzs56\" (UID: \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.618727 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-c5fe-account-create-759dd"]
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.620024 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.622043 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.641157 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c5fe-account-create-759dd"]
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.693416 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fzs56\" (UID: \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.693777 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbmrp\" (UniqueName: \"kubernetes.io/projected/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-kube-api-access-fbmrp\") pod \"mysqld-exporter-openstack-cell1-db-create-fzs56\" (UID: \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.694282 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fzs56\" (UID: \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.716889 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbmrp\" (UniqueName: \"kubernetes.io/projected/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-kube-api-access-fbmrp\") pod \"mysqld-exporter-openstack-cell1-db-create-fzs56\" (UID: \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.742156 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.795858 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07773117-0d6a-4c24-a8d6-4f2f27f280d9-operator-scripts\") pod \"mysqld-exporter-c5fe-account-create-759dd\" (UID: \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\") " pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.795924 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjrg\" (UniqueName: \"kubernetes.io/projected/07773117-0d6a-4c24-a8d6-4f2f27f280d9-kube-api-access-zwjrg\") pod \"mysqld-exporter-c5fe-account-create-759dd\" (UID: \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\") " pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.898321 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07773117-0d6a-4c24-a8d6-4f2f27f280d9-operator-scripts\") pod \"mysqld-exporter-c5fe-account-create-759dd\" (UID: \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\") " pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.898395 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjrg\" (UniqueName: \"kubernetes.io/projected/07773117-0d6a-4c24-a8d6-4f2f27f280d9-kube-api-access-zwjrg\") pod \"mysqld-exporter-c5fe-account-create-759dd\" (UID: \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\") " pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.900149 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07773117-0d6a-4c24-a8d6-4f2f27f280d9-operator-scripts\") pod \"mysqld-exporter-c5fe-account-create-759dd\" (UID: \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\") " pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.930483 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjrg\" (UniqueName: \"kubernetes.io/projected/07773117-0d6a-4c24-a8d6-4f2f27f280d9-kube-api-access-zwjrg\") pod \"mysqld-exporter-c5fe-account-create-759dd\" (UID: \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\") " pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:28 crc kubenswrapper[4678]: I1124 11:36:28.938903 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:29 crc kubenswrapper[4678]: I1124 11:36:29.290896 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"]
Nov 24 11:36:29 crc kubenswrapper[4678]: W1124 11:36:29.297394 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c2cc500_b88a_441b_bd7b_3f3bed5dd1b7.slice/crio-8c9dd8099de2d9c148d753ca139c512a34255d5262271c32c84f63b79c2e32f2 WatchSource:0}: Error finding container 8c9dd8099de2d9c148d753ca139c512a34255d5262271c32c84f63b79c2e32f2: Status 404 returned error can't find the container with id 8c9dd8099de2d9c148d753ca139c512a34255d5262271c32c84f63b79c2e32f2
Nov 24 11:36:29 crc kubenswrapper[4678]: I1124 11:36:29.451298 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c5fe-account-create-759dd"]
Nov 24 11:36:29 crc kubenswrapper[4678]: W1124 11:36:29.451416 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07773117_0d6a_4c24_a8d6_4f2f27f280d9.slice/crio-3f607cf661d0e8faa5777c4dc95b0abdba3dde8541e56a5639aad8b2207c0110 WatchSource:0}: Error finding container 3f607cf661d0e8faa5777c4dc95b0abdba3dde8541e56a5639aad8b2207c0110: Status 404 returned error can't find the container with id 3f607cf661d0e8faa5777c4dc95b0abdba3dde8541e56a5639aad8b2207c0110
Nov 24 11:36:29 crc kubenswrapper[4678]: I1124 11:36:29.886133 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 24 11:36:29 crc kubenswrapper[4678]: I1124 11:36:29.886613 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 24 11:36:29 crc kubenswrapper[4678]: I1124 11:36:29.889611 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 24 11:36:30 crc kubenswrapper[4678]: I1124 11:36:30.224703 4678 generic.go:334] "Generic (PLEG): container finished" podID="9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7" containerID="2f58d8570f99884b73bf01b741429fa315cea40f4309d3c345c021362ad654e6" exitCode=0
Nov 24 11:36:30 crc kubenswrapper[4678]: I1124 11:36:30.224777 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56" event={"ID":"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7","Type":"ContainerDied","Data":"2f58d8570f99884b73bf01b741429fa315cea40f4309d3c345c021362ad654e6"}
Nov 24 11:36:30 crc kubenswrapper[4678]: I1124 11:36:30.225299 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56" event={"ID":"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7","Type":"ContainerStarted","Data":"8c9dd8099de2d9c148d753ca139c512a34255d5262271c32c84f63b79c2e32f2"}
Nov 24 11:36:30 crc kubenswrapper[4678]: I1124 11:36:30.226605 4678 generic.go:334] "Generic (PLEG): container finished" podID="07773117-0d6a-4c24-a8d6-4f2f27f280d9" containerID="db79d0acb9b08906621f939f8f093d9239e80208b395b649a5ea5f9e723d7485" exitCode=0
Nov 24 11:36:30 crc kubenswrapper[4678]: I1124 11:36:30.226782 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c5fe-account-create-759dd" event={"ID":"07773117-0d6a-4c24-a8d6-4f2f27f280d9","Type":"ContainerDied","Data":"db79d0acb9b08906621f939f8f093d9239e80208b395b649a5ea5f9e723d7485"}
Nov 24 11:36:30 crc kubenswrapper[4678]: I1124 11:36:30.226847 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c5fe-account-create-759dd" event={"ID":"07773117-0d6a-4c24-a8d6-4f2f27f280d9","Type":"ContainerStarted","Data":"3f607cf661d0e8faa5777c4dc95b0abdba3dde8541e56a5639aad8b2207c0110"}
Nov 24 11:36:30 crc kubenswrapper[4678]: I1124 11:36:30.228528 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.509602 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ch9vg"]
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.511623 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ch9vg"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.514837 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.514842 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hr99s"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.532819 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ch9vg"]
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.668184 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-db-sync-config-data\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.668602 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-config-data\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.668640 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.668714 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-combined-ca-bundle\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.668734 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq72w\" (UniqueName: \"kubernetes.io/projected/3c6005a5-db1b-49b6-87ce-c507e10a6d21-kube-api-access-sq72w\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.677507 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a7a4a62-9baa-4df8-ba83-688dc6817249-etc-swift\") pod \"swift-storage-0\" (UID: \"1a7a4a62-9baa-4df8-ba83-688dc6817249\") " pod="openstack/swift-storage-0"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.748540 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c5fe-account-create-759dd"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.751911 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.760460 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.770847 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-db-sync-config-data\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.771084 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-config-data\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.771253 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-combined-ca-bundle\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.771323 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq72w\" (UniqueName: \"kubernetes.io/projected/3c6005a5-db1b-49b6-87ce-c507e10a6d21-kube-api-access-sq72w\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.776588 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-db-sync-config-data\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc 
kubenswrapper[4678]: I1124 11:36:31.777739 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-config-data\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.784990 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-combined-ca-bundle\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.802598 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq72w\" (UniqueName: \"kubernetes.io/projected/3c6005a5-db1b-49b6-87ce-c507e10a6d21-kube-api-access-sq72w\") pod \"glance-db-sync-ch9vg\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.852199 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ch9vg" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.873403 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwjrg\" (UniqueName: \"kubernetes.io/projected/07773117-0d6a-4c24-a8d6-4f2f27f280d9-kube-api-access-zwjrg\") pod \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\" (UID: \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\") " Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.873579 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07773117-0d6a-4c24-a8d6-4f2f27f280d9-operator-scripts\") pod \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\" (UID: \"07773117-0d6a-4c24-a8d6-4f2f27f280d9\") " Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.873628 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-operator-scripts\") pod \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\" (UID: \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\") " Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.873733 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbmrp\" (UniqueName: \"kubernetes.io/projected/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-kube-api-access-fbmrp\") pod \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\" (UID: \"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7\") " Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.876111 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07773117-0d6a-4c24-a8d6-4f2f27f280d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07773117-0d6a-4c24-a8d6-4f2f27f280d9" (UID: "07773117-0d6a-4c24-a8d6-4f2f27f280d9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.876901 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7" (UID: "9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.880311 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-kube-api-access-fbmrp" (OuterVolumeSpecName: "kube-api-access-fbmrp") pod "9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7" (UID: "9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7"). InnerVolumeSpecName "kube-api-access-fbmrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.880706 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-blf4t" podUID="de344c51-a739-44dc-b0a2-914839d40a8b" containerName="ovn-controller" probeResult="failure" output=< Nov 24 11:36:31 crc kubenswrapper[4678]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 11:36:31 crc kubenswrapper[4678]: > Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.891295 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07773117-0d6a-4c24-a8d6-4f2f27f280d9-kube-api-access-zwjrg" (OuterVolumeSpecName: "kube-api-access-zwjrg") pod "07773117-0d6a-4c24-a8d6-4f2f27f280d9" (UID: "07773117-0d6a-4c24-a8d6-4f2f27f280d9"). InnerVolumeSpecName "kube-api-access-zwjrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.954094 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.954202 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xnsx2" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.983118 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07773117-0d6a-4c24-a8d6-4f2f27f280d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.983158 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.983167 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbmrp\" (UniqueName: \"kubernetes.io/projected/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7-kube-api-access-fbmrp\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:31 crc kubenswrapper[4678]: I1124 11:36:31.983178 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwjrg\" (UniqueName: \"kubernetes.io/projected/07773117-0d6a-4c24-a8d6-4f2f27f280d9-kube-api-access-zwjrg\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.173022 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-blf4t-config-zt6d8"] Nov 24 11:36:32 crc kubenswrapper[4678]: E1124 11:36:32.173543 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7" containerName="mariadb-database-create" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.173562 4678 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7" containerName="mariadb-database-create" Nov 24 11:36:32 crc kubenswrapper[4678]: E1124 11:36:32.173579 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07773117-0d6a-4c24-a8d6-4f2f27f280d9" containerName="mariadb-account-create" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.173586 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="07773117-0d6a-4c24-a8d6-4f2f27f280d9" containerName="mariadb-account-create" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.173921 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="07773117-0d6a-4c24-a8d6-4f2f27f280d9" containerName="mariadb-account-create" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.173950 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7" containerName="mariadb-database-create" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.174817 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.182264 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.189420 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-blf4t-config-zt6d8"] Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.255371 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c5fe-account-create-759dd" event={"ID":"07773117-0d6a-4c24-a8d6-4f2f27f280d9","Type":"ContainerDied","Data":"3f607cf661d0e8faa5777c4dc95b0abdba3dde8541e56a5639aad8b2207c0110"} Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.255418 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f607cf661d0e8faa5777c4dc95b0abdba3dde8541e56a5639aad8b2207c0110" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.255527 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c5fe-account-create-759dd" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.258000 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56" event={"ID":"9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7","Type":"ContainerDied","Data":"8c9dd8099de2d9c148d753ca139c512a34255d5262271c32c84f63b79c2e32f2"} Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.258043 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9dd8099de2d9c148d753ca139c512a34255d5262271c32c84f63b79c2e32f2" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.258676 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fzs56" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.296560 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg65j\" (UniqueName: \"kubernetes.io/projected/fb16f708-35c9-421d-af98-ef172a021f0d-kube-api-access-zg65j\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.296854 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-scripts\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.296906 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-additional-scripts\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.296960 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-log-ovn\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.297012 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.297065 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run-ovn\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.398842 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg65j\" (UniqueName: \"kubernetes.io/projected/fb16f708-35c9-421d-af98-ef172a021f0d-kube-api-access-zg65j\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.398973 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-scripts\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.399017 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-additional-scripts\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.399055 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-log-ovn\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.399087 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.399117 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run-ovn\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.399433 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run-ovn\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.399764 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-log-ovn\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.399823 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.400509 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-additional-scripts\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.401367 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-scripts\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: W1124 11:36:32.423675 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a7a4a62_9baa_4df8_ba83_688dc6817249.slice/crio-65231a12569b52160a8b99d91119f76d7c1eb27837ac3a1258d10b55f2d554b0 WatchSource:0}: Error finding container 65231a12569b52160a8b99d91119f76d7c1eb27837ac3a1258d10b55f2d554b0: Status 404 returned error can't find the container with id 65231a12569b52160a8b99d91119f76d7c1eb27837ac3a1258d10b55f2d554b0 Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.428284 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.441336 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg65j\" (UniqueName: \"kubernetes.io/projected/fb16f708-35c9-421d-af98-ef172a021f0d-kube-api-access-zg65j\") pod \"ovn-controller-blf4t-config-zt6d8\" (UID: 
\"fb16f708-35c9-421d-af98-ef172a021f0d\") " pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.506581 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.732307 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ch9vg"] Nov 24 11:36:32 crc kubenswrapper[4678]: I1124 11:36:32.909737 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.151796 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-blf4t-config-zt6d8"] Nov 24 11:36:33 crc kubenswrapper[4678]: W1124 11:36:33.190973 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb16f708_35c9_421d_af98_ef172a021f0d.slice/crio-79077e143433b9e9a0040d395e16d1a527e27b818f708be7141927ad97d79094 WatchSource:0}: Error finding container 79077e143433b9e9a0040d395e16d1a527e27b818f708be7141927ad97d79094: Status 404 returned error can't find the container with id 79077e143433b9e9a0040d395e16d1a527e27b818f708be7141927ad97d79094 Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.284655 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.288192 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ch9vg" event={"ID":"3c6005a5-db1b-49b6-87ce-c507e10a6d21","Type":"ContainerStarted","Data":"ad21b0f8af6e1a79896295afb4d1023134fff0801df0de4878c3e60493370dba"} Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.293494 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:36:33 crc 
kubenswrapper[4678]: I1124 11:36:33.294205 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-blf4t-config-zt6d8" event={"ID":"fb16f708-35c9-421d-af98-ef172a021f0d","Type":"ContainerStarted","Data":"79077e143433b9e9a0040d395e16d1a527e27b818f708be7141927ad97d79094"} Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.298931 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="prometheus" containerID="cri-o://ecba69d197b24e5a5a5ba6a8e6b656b0cdcac99f6b91db39f8bfaa51c70ffb13" gracePeriod=600 Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.299213 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"65231a12569b52160a8b99d91119f76d7c1eb27837ac3a1258d10b55f2d554b0"} Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.299274 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="thanos-sidecar" containerID="cri-o://fbc8732dfc88f205b0e9a06e1051ab0a3fcd2f8dbc3c9641723f582d65bc2537" gracePeriod=600 Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.299329 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="config-reloader" containerID="cri-o://7db6ea9c92fe32e4e7c04e09ce85b1655c4346c3668938e461a58b83357e5232" gracePeriod=600 Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.418968 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6wt6l"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.421365 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.445548 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6wt6l"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.464270 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6472-account-create-dmhpl"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.485880 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.495326 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.539206 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6472-account-create-dmhpl"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.542544 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzzrg\" (UniqueName: \"kubernetes.io/projected/6ccdb39d-cd19-45a6-aa4d-bbee44622101-kube-api-access-jzzrg\") pod \"cinder-db-create-6wt6l\" (UID: \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\") " pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.542680 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccdb39d-cd19-45a6-aa4d-bbee44622101-operator-scripts\") pod \"cinder-db-create-6wt6l\" (UID: \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\") " pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.644171 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccdb39d-cd19-45a6-aa4d-bbee44622101-operator-scripts\") pod 
\"cinder-db-create-6wt6l\" (UID: \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\") " pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.644294 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-przxb\" (UniqueName: \"kubernetes.io/projected/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-kube-api-access-przxb\") pod \"barbican-6472-account-create-dmhpl\" (UID: \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\") " pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.644466 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-operator-scripts\") pod \"barbican-6472-account-create-dmhpl\" (UID: \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\") " pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.644571 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzzrg\" (UniqueName: \"kubernetes.io/projected/6ccdb39d-cd19-45a6-aa4d-bbee44622101-kube-api-access-jzzrg\") pod \"cinder-db-create-6wt6l\" (UID: \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\") " pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.645902 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccdb39d-cd19-45a6-aa4d-bbee44622101-operator-scripts\") pod \"cinder-db-create-6wt6l\" (UID: \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\") " pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.659571 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-750d-account-create-w4fsn"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.661518 4678 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.667633 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.675170 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-750d-account-create-w4fsn"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.684283 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-fpvbh"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.685632 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.691846 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fpvbh"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.693744 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzzrg\" (UniqueName: \"kubernetes.io/projected/6ccdb39d-cd19-45a6-aa4d-bbee44622101-kube-api-access-jzzrg\") pod \"cinder-db-create-6wt6l\" (UID: \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\") " pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.706502 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4dr4g"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.708132 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.713271 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.713332 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.715477 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.715552 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cvvbb" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.746748 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4dr4g"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.748423 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vmqc\" (UniqueName: \"kubernetes.io/projected/3cfd80a7-5fb2-4a38-9a9b-839510edff06-kube-api-access-4vmqc\") pod \"cinder-750d-account-create-w4fsn\" (UID: \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\") " pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.748482 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cfd80a7-5fb2-4a38-9a9b-839510edff06-operator-scripts\") pod \"cinder-750d-account-create-w4fsn\" (UID: \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\") " pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.748521 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-przxb\" (UniqueName: \"kubernetes.io/projected/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-kube-api-access-przxb\") 
pod \"barbican-6472-account-create-dmhpl\" (UID: \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\") " pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.748911 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzh62\" (UniqueName: \"kubernetes.io/projected/14aebdf2-73dd-4904-a5bb-01dbe513298e-kube-api-access-pzh62\") pod \"barbican-db-create-fpvbh\" (UID: \"14aebdf2-73dd-4904-a5bb-01dbe513298e\") " pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.749107 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14aebdf2-73dd-4904-a5bb-01dbe513298e-operator-scripts\") pod \"barbican-db-create-fpvbh\" (UID: \"14aebdf2-73dd-4904-a5bb-01dbe513298e\") " pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.749163 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-operator-scripts\") pod \"barbican-6472-account-create-dmhpl\" (UID: \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\") " pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.750273 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-operator-scripts\") pod \"barbican-6472-account-create-dmhpl\" (UID: \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\") " pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.828806 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-p5q8z"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.834075 4678 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dec4-account-create-q8wzh"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.837007 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-p5q8z"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.837201 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.837780 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.837269 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.843152 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.850467 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-przxb\" (UniqueName: \"kubernetes.io/projected/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-kube-api-access-przxb\") pod \"barbican-6472-account-create-dmhpl\" (UID: \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\") " pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.851341 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vmqc\" (UniqueName: \"kubernetes.io/projected/3cfd80a7-5fb2-4a38-9a9b-839510edff06-kube-api-access-4vmqc\") pod \"cinder-750d-account-create-w4fsn\" (UID: \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\") " pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.851391 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3cfd80a7-5fb2-4a38-9a9b-839510edff06-operator-scripts\") pod \"cinder-750d-account-create-w4fsn\" (UID: \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\") " pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.851456 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzh62\" (UniqueName: \"kubernetes.io/projected/14aebdf2-73dd-4904-a5bb-01dbe513298e-kube-api-access-pzh62\") pod \"barbican-db-create-fpvbh\" (UID: \"14aebdf2-73dd-4904-a5bb-01dbe513298e\") " pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.851498 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xgc\" (UniqueName: \"kubernetes.io/projected/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-kube-api-access-95xgc\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.851531 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14aebdf2-73dd-4904-a5bb-01dbe513298e-operator-scripts\") pod \"barbican-db-create-fpvbh\" (UID: \"14aebdf2-73dd-4904-a5bb-01dbe513298e\") " pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.851546 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-combined-ca-bundle\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.851582 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-config-data\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.852510 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14aebdf2-73dd-4904-a5bb-01dbe513298e-operator-scripts\") pod \"barbican-db-create-fpvbh\" (UID: \"14aebdf2-73dd-4904-a5bb-01dbe513298e\") " pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.854028 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cfd80a7-5fb2-4a38-9a9b-839510edff06-operator-scripts\") pod \"cinder-750d-account-create-w4fsn\" (UID: \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\") " pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.856801 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.907258 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzh62\" (UniqueName: \"kubernetes.io/projected/14aebdf2-73dd-4904-a5bb-01dbe513298e-kube-api-access-pzh62\") pod \"barbican-db-create-fpvbh\" (UID: \"14aebdf2-73dd-4904-a5bb-01dbe513298e\") " pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.914495 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vmqc\" (UniqueName: \"kubernetes.io/projected/3cfd80a7-5fb2-4a38-9a9b-839510edff06-kube-api-access-4vmqc\") pod \"cinder-750d-account-create-w4fsn\" (UID: \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\") " pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.950009 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dec4-account-create-q8wzh"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.953929 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b637f29-368e-458f-93dd-77f478100f0b-operator-scripts\") pod \"neutron-dec4-account-create-q8wzh\" (UID: \"2b637f29-368e-458f-93dd-77f478100f0b\") " pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.954504 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fm8\" (UniqueName: \"kubernetes.io/projected/2b637f29-368e-458f-93dd-77f478100f0b-kube-api-access-44fm8\") pod \"neutron-dec4-account-create-q8wzh\" (UID: \"2b637f29-368e-458f-93dd-77f478100f0b\") " pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.954558 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-95xgc\" (UniqueName: \"kubernetes.io/projected/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-kube-api-access-95xgc\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.954592 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1811771b-0c1b-4767-b4e2-ec8b52d12f18-operator-scripts\") pod \"heat-db-create-p5q8z\" (UID: \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\") " pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.954629 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-combined-ca-bundle\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.954654 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbt5p\" (UniqueName: \"kubernetes.io/projected/1811771b-0c1b-4767-b4e2-ec8b52d12f18-kube-api-access-fbt5p\") pod \"heat-db-create-p5q8z\" (UID: \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\") " pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.954712 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-config-data\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.970072 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-combined-ca-bundle\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.972231 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-b0fb-account-create-w4x74"] Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.973922 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.975868 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-config-data\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.977040 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.983909 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:33 crc kubenswrapper[4678]: I1124 11:36:33.985216 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95xgc\" (UniqueName: \"kubernetes.io/projected/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-kube-api-access-95xgc\") pod \"keystone-db-sync-4dr4g\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.000730 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.008984 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-q26st"] Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.010348 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q26st" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.029723 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.059176 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b637f29-368e-458f-93dd-77f478100f0b-operator-scripts\") pod \"neutron-dec4-account-create-q8wzh\" (UID: \"2b637f29-368e-458f-93dd-77f478100f0b\") " pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.059345 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44fm8\" (UniqueName: \"kubernetes.io/projected/2b637f29-368e-458f-93dd-77f478100f0b-kube-api-access-44fm8\") pod \"neutron-dec4-account-create-q8wzh\" (UID: \"2b637f29-368e-458f-93dd-77f478100f0b\") " pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.059413 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1811771b-0c1b-4767-b4e2-ec8b52d12f18-operator-scripts\") pod \"heat-db-create-p5q8z\" (UID: \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\") " pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.059462 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbt5p\" (UniqueName: 
\"kubernetes.io/projected/1811771b-0c1b-4767-b4e2-ec8b52d12f18-kube-api-access-fbt5p\") pod \"heat-db-create-p5q8z\" (UID: \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\") " pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.059500 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvcxf\" (UniqueName: \"kubernetes.io/projected/75a467ed-5cfa-44da-9e07-7902433ef5a0-kube-api-access-mvcxf\") pod \"heat-b0fb-account-create-w4x74\" (UID: \"75a467ed-5cfa-44da-9e07-7902433ef5a0\") " pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.059524 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a467ed-5cfa-44da-9e07-7902433ef5a0-operator-scripts\") pod \"heat-b0fb-account-create-w4x74\" (UID: \"75a467ed-5cfa-44da-9e07-7902433ef5a0\") " pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.062299 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b637f29-368e-458f-93dd-77f478100f0b-operator-scripts\") pod \"neutron-dec4-account-create-q8wzh\" (UID: \"2b637f29-368e-458f-93dd-77f478100f0b\") " pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.062400 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1811771b-0c1b-4767-b4e2-ec8b52d12f18-operator-scripts\") pod \"heat-db-create-p5q8z\" (UID: \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\") " pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.082892 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q26st"] Nov 24 11:36:34 crc 
kubenswrapper[4678]: I1124 11:36:34.083772 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbt5p\" (UniqueName: \"kubernetes.io/projected/1811771b-0c1b-4767-b4e2-ec8b52d12f18-kube-api-access-fbt5p\") pod \"heat-db-create-p5q8z\" (UID: \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\") " pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.085660 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44fm8\" (UniqueName: \"kubernetes.io/projected/2b637f29-368e-458f-93dd-77f478100f0b-kube-api-access-44fm8\") pod \"neutron-dec4-account-create-q8wzh\" (UID: \"2b637f29-368e-458f-93dd-77f478100f0b\") " pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.096549 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b0fb-account-create-w4x74"] Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.166312 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9473500-25d5-4b49-a95a-c4b1de4ac854-operator-scripts\") pod \"neutron-db-create-q26st\" (UID: \"c9473500-25d5-4b49-a95a-c4b1de4ac854\") " pod="openstack/neutron-db-create-q26st" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.166432 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvcxf\" (UniqueName: \"kubernetes.io/projected/75a467ed-5cfa-44da-9e07-7902433ef5a0-kube-api-access-mvcxf\") pod \"heat-b0fb-account-create-w4x74\" (UID: \"75a467ed-5cfa-44da-9e07-7902433ef5a0\") " pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.166455 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a467ed-5cfa-44da-9e07-7902433ef5a0-operator-scripts\") 
pod \"heat-b0fb-account-create-w4x74\" (UID: \"75a467ed-5cfa-44da-9e07-7902433ef5a0\") " pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.166523 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct656\" (UniqueName: \"kubernetes.io/projected/c9473500-25d5-4b49-a95a-c4b1de4ac854-kube-api-access-ct656\") pod \"neutron-db-create-q26st\" (UID: \"c9473500-25d5-4b49-a95a-c4b1de4ac854\") " pod="openstack/neutron-db-create-q26st" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.168868 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a467ed-5cfa-44da-9e07-7902433ef5a0-operator-scripts\") pod \"heat-b0fb-account-create-w4x74\" (UID: \"75a467ed-5cfa-44da-9e07-7902433ef5a0\") " pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.186861 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.191471 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.198463 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.206272 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.226222 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvcxf\" (UniqueName: \"kubernetes.io/projected/75a467ed-5cfa-44da-9e07-7902433ef5a0-kube-api-access-mvcxf\") pod \"heat-b0fb-account-create-w4x74\" (UID: \"75a467ed-5cfa-44da-9e07-7902433ef5a0\") " pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.275712 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.275801 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-config-data\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.275837 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9473500-25d5-4b49-a95a-c4b1de4ac854-operator-scripts\") pod \"neutron-db-create-q26st\" (UID: \"c9473500-25d5-4b49-a95a-c4b1de4ac854\") " pod="openstack/neutron-db-create-q26st" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.275950 
4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsd7l\" (UniqueName: \"kubernetes.io/projected/70557cb4-7672-4047-a601-1cf7723d8c82-kube-api-access-tsd7l\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.275975 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct656\" (UniqueName: \"kubernetes.io/projected/c9473500-25d5-4b49-a95a-c4b1de4ac854-kube-api-access-ct656\") pod \"neutron-db-create-q26st\" (UID: \"c9473500-25d5-4b49-a95a-c4b1de4ac854\") " pod="openstack/neutron-db-create-q26st" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.277047 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9473500-25d5-4b49-a95a-c4b1de4ac854-operator-scripts\") pod \"neutron-db-create-q26st\" (UID: \"c9473500-25d5-4b49-a95a-c4b1de4ac854\") " pod="openstack/neutron-db-create-q26st" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.311654 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct656\" (UniqueName: \"kubernetes.io/projected/c9473500-25d5-4b49-a95a-c4b1de4ac854-kube-api-access-ct656\") pod \"neutron-db-create-q26st\" (UID: \"c9473500-25d5-4b49-a95a-c4b1de4ac854\") " pod="openstack/neutron-db-create-q26st" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.345641 4678 generic.go:334] "Generic (PLEG): container finished" podID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerID="fbc8732dfc88f205b0e9a06e1051ab0a3fcd2f8dbc3c9641723f582d65bc2537" exitCode=0 Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.345700 4678 generic.go:334] "Generic (PLEG): container finished" podID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerID="7db6ea9c92fe32e4e7c04e09ce85b1655c4346c3668938e461a58b83357e5232" 
exitCode=0 Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.345712 4678 generic.go:334] "Generic (PLEG): container finished" podID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerID="ecba69d197b24e5a5a5ba6a8e6b656b0cdcac99f6b91db39f8bfaa51c70ffb13" exitCode=0 Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.345766 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerDied","Data":"fbc8732dfc88f205b0e9a06e1051ab0a3fcd2f8dbc3c9641723f582d65bc2537"} Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.345815 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerDied","Data":"7db6ea9c92fe32e4e7c04e09ce85b1655c4346c3668938e461a58b83357e5232"} Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.345831 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerDied","Data":"ecba69d197b24e5a5a5ba6a8e6b656b0cdcac99f6b91db39f8bfaa51c70ffb13"} Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.351065 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.354096 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-blf4t-config-zt6d8" event={"ID":"fb16f708-35c9-421d-af98-ef172a021f0d","Type":"ContainerStarted","Data":"7742601b4251af3d976d5e6333202f63a506b65a0212820e600bc68a6bf07e78"} Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.370322 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.380029 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsd7l\" (UniqueName: \"kubernetes.io/projected/70557cb4-7672-4047-a601-1cf7723d8c82-kube-api-access-tsd7l\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.380162 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.380273 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-config-data\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.391273 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-config-data\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.393184 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-blf4t-config-zt6d8" podStartSLOduration=2.393162064 podStartE2EDuration="2.393162064s" podCreationTimestamp="2025-11-24 11:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:36:34.38809548 +0000 UTC 
m=+1205.319155129" watchObservedRunningTime="2025-11-24 11:36:34.393162064 +0000 UTC m=+1205.324221703" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.403935 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.407325 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.407852 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q26st" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.418914 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsd7l\" (UniqueName: \"kubernetes.io/projected/70557cb4-7672-4047-a601-1cf7723d8c82-kube-api-access-tsd7l\") pod \"mysqld-exporter-0\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " pod="openstack/mysqld-exporter-0" Nov 24 11:36:34 crc kubenswrapper[4678]: I1124 11:36:34.535984 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.368829 4678 generic.go:334] "Generic (PLEG): container finished" podID="fb16f708-35c9-421d-af98-ef172a021f0d" containerID="7742601b4251af3d976d5e6333202f63a506b65a0212820e600bc68a6bf07e78" exitCode=0 Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.369438 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-blf4t-config-zt6d8" event={"ID":"fb16f708-35c9-421d-af98-ef172a021f0d","Type":"ContainerDied","Data":"7742601b4251af3d976d5e6333202f63a506b65a0212820e600bc68a6bf07e78"} Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.511141 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fpvbh"] Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.537469 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6wt6l"] Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.544943 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-750d-account-create-w4fsn"] Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.552206 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6472-account-create-dmhpl"] Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.617435 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4dr4g"] Nov 24 11:36:35 crc kubenswrapper[4678]: W1124 11:36:35.673313 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14aebdf2_73dd_4904_a5bb_01dbe513298e.slice/crio-1f5dbaf462657ecb6561f5f5a5bafee2d4663cfdda2c971fa0c5faabfae2e94d WatchSource:0}: Error finding container 1f5dbaf462657ecb6561f5f5a5bafee2d4663cfdda2c971fa0c5faabfae2e94d: Status 404 returned error can't find the container with id 1f5dbaf462657ecb6561f5f5a5bafee2d4663cfdda2c971fa0c5faabfae2e94d Nov 24 
11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.728779 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.841188 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-p5q8z"] Nov 24 11:36:35 crc kubenswrapper[4678]: W1124 11:36:35.890124 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1811771b_0c1b_4767_b4e2_ec8b52d12f18.slice/crio-dca1e0fe423722a5fbcac9d3410ae6e70d4d3c1d21688fae6a626ef5c68909c4 WatchSource:0}: Error finding container dca1e0fe423722a5fbcac9d3410ae6e70d4d3c1d21688fae6a626ef5c68909c4: Status 404 returned error can't find the container with id dca1e0fe423722a5fbcac9d3410ae6e70d4d3c1d21688fae6a626ef5c68909c4 Nov 24 11:36:35 crc kubenswrapper[4678]: I1124 11:36:35.978755 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dec4-account-create-q8wzh"] Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.010434 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: W1124 11:36:36.025606 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b637f29_368e_458f_93dd_77f478100f0b.slice/crio-98942b3678d29fd9f0831b9cb694862e880f2d63785ee9c700b235f47f418049 WatchSource:0}: Error finding container 98942b3678d29fd9f0831b9cb694862e880f2d63785ee9c700b235f47f418049: Status 404 returned error can't find the container with id 98942b3678d29fd9f0831b9cb694862e880f2d63785ee9c700b235f47f418049 Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.132354 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config\") pod \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.132470 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-web-config\") pod \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.132514 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-thanos-prometheus-http-client-file\") pod \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.132570 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f794d99b-6371-445e-9bb9-74f0bdbee6bc-prometheus-metric-storage-rulefiles-0\") pod 
\"f794d99b-6371-445e-9bb9-74f0bdbee6bc\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.132710 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") pod \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.132783 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-tls-assets\") pod \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.132839 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsnrd\" (UniqueName: \"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-kube-api-access-nsnrd\") pod \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.132884 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config-out\") pod \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\" (UID: \"f794d99b-6371-445e-9bb9-74f0bdbee6bc\") " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.134240 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f794d99b-6371-445e-9bb9-74f0bdbee6bc-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f794d99b-6371-445e-9bb9-74f0bdbee6bc" (UID: "f794d99b-6371-445e-9bb9-74f0bdbee6bc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.152464 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f794d99b-6371-445e-9bb9-74f0bdbee6bc" (UID: "f794d99b-6371-445e-9bb9-74f0bdbee6bc"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.156149 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config-out" (OuterVolumeSpecName: "config-out") pod "f794d99b-6371-445e-9bb9-74f0bdbee6bc" (UID: "f794d99b-6371-445e-9bb9-74f0bdbee6bc"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.167800 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config" (OuterVolumeSpecName: "config") pod "f794d99b-6371-445e-9bb9-74f0bdbee6bc" (UID: "f794d99b-6371-445e-9bb9-74f0bdbee6bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.167850 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f794d99b-6371-445e-9bb9-74f0bdbee6bc" (UID: "f794d99b-6371-445e-9bb9-74f0bdbee6bc"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.199950 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-kube-api-access-nsnrd" (OuterVolumeSpecName: "kube-api-access-nsnrd") pod "f794d99b-6371-445e-9bb9-74f0bdbee6bc" (UID: "f794d99b-6371-445e-9bb9-74f0bdbee6bc"). InnerVolumeSpecName "kube-api-access-nsnrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.238178 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.238204 4678 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.238216 4678 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f794d99b-6371-445e-9bb9-74f0bdbee6bc-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.238229 4678 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.238243 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsnrd\" (UniqueName: \"kubernetes.io/projected/f794d99b-6371-445e-9bb9-74f0bdbee6bc-kube-api-access-nsnrd\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.238251 4678 
reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f794d99b-6371-445e-9bb9-74f0bdbee6bc-config-out\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.248445 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f794d99b-6371-445e-9bb9-74f0bdbee6bc" (UID: "f794d99b-6371-445e-9bb9-74f0bdbee6bc"). InnerVolumeSpecName "pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.340940 4678 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") on node \"crc\" " Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.377324 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q26st"] Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.414550 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-web-config" (OuterVolumeSpecName: "web-config") pod "f794d99b-6371-445e-9bb9-74f0bdbee6bc" (UID: "f794d99b-6371-445e-9bb9-74f0bdbee6bc"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.421209 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6wt6l" event={"ID":"6ccdb39d-cd19-45a6-aa4d-bbee44622101","Type":"ContainerStarted","Data":"802e34290ca8c76a143146efda4f7b29fc0deaf118f44d4817c5ca4d4b48cc7c"} Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.422880 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4dr4g" event={"ID":"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87","Type":"ContainerStarted","Data":"42fef467f59c5193188b9c2aaf11821b06f9e982a2f28f867f7a269e7b4e79b9"} Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.429308 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.429297 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f794d99b-6371-445e-9bb9-74f0bdbee6bc","Type":"ContainerDied","Data":"d61fc5e8e03ee1bf1870bd41700b1fb19a2db9b2487241e023e9a36e8572cea7"} Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.429708 4678 scope.go:117] "RemoveContainer" containerID="fbc8732dfc88f205b0e9a06e1051ab0a3fcd2f8dbc3c9641723f582d65bc2537" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.432359 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fpvbh" event={"ID":"14aebdf2-73dd-4904-a5bb-01dbe513298e","Type":"ContainerStarted","Data":"1f5dbaf462657ecb6561f5f5a5bafee2d4663cfdda2c971fa0c5faabfae2e94d"} Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.443054 4678 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f794d99b-6371-445e-9bb9-74f0bdbee6bc-web-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.454843 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-750d-account-create-w4fsn" event={"ID":"3cfd80a7-5fb2-4a38-9a9b-839510edff06","Type":"ContainerStarted","Data":"a54ea3f4e79d7bf09374273ecd4f3156d67b0becf72706912c9c799683b22715"} Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.460757 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p5q8z" event={"ID":"1811771b-0c1b-4767-b4e2-ec8b52d12f18","Type":"ContainerStarted","Data":"dca1e0fe423722a5fbcac9d3410ae6e70d4d3c1d21688fae6a626ef5c68909c4"} Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.463200 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6472-account-create-dmhpl" event={"ID":"cb5591ea-c50b-46c1-8ed3-e2062967d0f1","Type":"ContainerStarted","Data":"af6198ec161c7268d48c5ae971e8300eac88f02d364f12ad0f1781ab5a15d8db"} Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.464846 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dec4-account-create-q8wzh" event={"ID":"2b637f29-368e-458f-93dd-77f478100f0b","Type":"ContainerStarted","Data":"98942b3678d29fd9f0831b9cb694862e880f2d63785ee9c700b235f47f418049"} Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.483822 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.498689 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.518386 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:36:36 crc kubenswrapper[4678]: E1124 11:36:36.518953 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="config-reloader" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.518971 4678 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="config-reloader" Nov 24 11:36:36 crc kubenswrapper[4678]: E1124 11:36:36.519005 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="thanos-sidecar" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.519011 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="thanos-sidecar" Nov 24 11:36:36 crc kubenswrapper[4678]: E1124 11:36:36.519042 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="prometheus" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.519055 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="prometheus" Nov 24 11:36:36 crc kubenswrapper[4678]: E1124 11:36:36.519072 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="init-config-reloader" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.519079 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="init-config-reloader" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.526511 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="config-reloader" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.526564 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="thanos-sidecar" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.526576 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="prometheus" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.531735 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.538064 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vq5ht" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.542053 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.542305 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.543354 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.543510 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.558343 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.562101 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.565041 4678 scope.go:117] "RemoveContainer" containerID="7db6ea9c92fe32e4e7c04e09ce85b1655c4346c3668938e461a58b83357e5232" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.568731 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.604038 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-b0fb-account-create-w4x74"] Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.650984 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651091 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651143 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651223 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651246 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651281 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651306 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651323 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651377 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.651397 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qdvsl\" (UniqueName: \"kubernetes.io/projected/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-kube-api-access-qdvsl\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.689127 4678 scope.go:117] "RemoveContainer" containerID="ecba69d197b24e5a5a5ba6a8e6b656b0cdcac99f6b91db39f8bfaa51c70ffb13" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.727965 4678 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.728170 4678 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b") on node "crc" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.755749 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.756256 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvsl\" (UniqueName: \"kubernetes.io/projected/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-kube-api-access-qdvsl\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.756340 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.758127 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.758236 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.758361 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.758397 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc 
kubenswrapper[4678]: I1124 11:36:36.758428 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.758497 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.758519 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.758541 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.760606 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.788067 4678 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.788138 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/57a7f3ff30c2cd09ff1e8e65689295a1eec29ca4dace6e801961241a67275580/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.814773 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.820114 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.823283 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.826173 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.826958 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.833374 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.838018 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.841076 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.842496 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qdvsl\" (UniqueName: \"kubernetes.io/projected/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-kube-api-access-qdvsl\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.891136 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f7c3f10-6a91-4d74-9ac8-c343467f595b\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.904364 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b2f0329-4af5-4426-a61e-2b3b1deff8a7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b2f0329-4af5-4426-a61e-2b3b1deff8a7\") " pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.932067 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-blf4t" Nov 24 11:36:36 crc kubenswrapper[4678]: I1124 11:36:36.948992 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.202881 4678 scope.go:117] "RemoveContainer" containerID="9379a259a4a78313d7d9ff5af56185af8a366751b747318d10a88f696dab3fed" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.218501 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.275091 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run\") pod \"fb16f708-35c9-421d-af98-ef172a021f0d\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.275187 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-additional-scripts\") pod \"fb16f708-35c9-421d-af98-ef172a021f0d\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.275321 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-scripts\") pod \"fb16f708-35c9-421d-af98-ef172a021f0d\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.275481 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg65j\" (UniqueName: \"kubernetes.io/projected/fb16f708-35c9-421d-af98-ef172a021f0d-kube-api-access-zg65j\") pod \"fb16f708-35c9-421d-af98-ef172a021f0d\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.275777 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-log-ovn\") pod \"fb16f708-35c9-421d-af98-ef172a021f0d\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.275856 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run-ovn\") pod \"fb16f708-35c9-421d-af98-ef172a021f0d\" (UID: \"fb16f708-35c9-421d-af98-ef172a021f0d\") " Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.276537 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fb16f708-35c9-421d-af98-ef172a021f0d" (UID: "fb16f708-35c9-421d-af98-ef172a021f0d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.277596 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-scripts" (OuterVolumeSpecName: "scripts") pod "fb16f708-35c9-421d-af98-ef172a021f0d" (UID: "fb16f708-35c9-421d-af98-ef172a021f0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.278031 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run" (OuterVolumeSpecName: "var-run") pod "fb16f708-35c9-421d-af98-ef172a021f0d" (UID: "fb16f708-35c9-421d-af98-ef172a021f0d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.284816 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fb16f708-35c9-421d-af98-ef172a021f0d" (UID: "fb16f708-35c9-421d-af98-ef172a021f0d"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.291230 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fb16f708-35c9-421d-af98-ef172a021f0d" (UID: "fb16f708-35c9-421d-af98-ef172a021f0d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.297220 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb16f708-35c9-421d-af98-ef172a021f0d-kube-api-access-zg65j" (OuterVolumeSpecName: "kube-api-access-zg65j") pod "fb16f708-35c9-421d-af98-ef172a021f0d" (UID: "fb16f708-35c9-421d-af98-ef172a021f0d"). InnerVolumeSpecName "kube-api-access-zg65j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.379086 4678 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.379116 4678 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.379125 4678 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fb16f708-35c9-421d-af98-ef172a021f0d-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.379135 4678 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-additional-scripts\") on 
node \"crc\" DevicePath \"\"" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.379146 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb16f708-35c9-421d-af98-ef172a021f0d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.379154 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg65j\" (UniqueName: \"kubernetes.io/projected/fb16f708-35c9-421d-af98-ef172a021f0d-kube-api-access-zg65j\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.528770 4678 generic.go:334] "Generic (PLEG): container finished" podID="cb5591ea-c50b-46c1-8ed3-e2062967d0f1" containerID="9b4e0692ae8403cfbc2cd50df22fd5121d72960e267d6fd59c086085a8776297" exitCode=0 Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.529260 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6472-account-create-dmhpl" event={"ID":"cb5591ea-c50b-46c1-8ed3-e2062967d0f1","Type":"ContainerDied","Data":"9b4e0692ae8403cfbc2cd50df22fd5121d72960e267d6fd59c086085a8776297"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.540275 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-blf4t-config-zt6d8"] Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.543845 4678 generic.go:334] "Generic (PLEG): container finished" podID="6ccdb39d-cd19-45a6-aa4d-bbee44622101" containerID="9e487f59561fa870b6b16aefba3eb5a2c6fe89266d2547405673166244a1edda" exitCode=0 Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.543929 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6wt6l" event={"ID":"6ccdb39d-cd19-45a6-aa4d-bbee44622101","Type":"ContainerDied","Data":"9e487f59561fa870b6b16aefba3eb5a2c6fe89266d2547405673166244a1edda"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.578730 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ovn-controller-blf4t-config-zt6d8"] Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.587034 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-blf4t-config-zt6d8" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.587052 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79077e143433b9e9a0040d395e16d1a527e27b818f708be7141927ad97d79094" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.599168 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b0fb-account-create-w4x74" event={"ID":"75a467ed-5cfa-44da-9e07-7902433ef5a0","Type":"ContainerStarted","Data":"dbc46155734fe15a43f50846085999892a50789af67edb635c4017f79e0edc59"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.652515 4678 generic.go:334] "Generic (PLEG): container finished" podID="14aebdf2-73dd-4904-a5bb-01dbe513298e" containerID="acf2e9eb1542e13e2a26b0e3eac8b39ba506b86eb9da7dcb7502cb8a50b56f16" exitCode=0 Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.652582 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fpvbh" event={"ID":"14aebdf2-73dd-4904-a5bb-01dbe513298e","Type":"ContainerDied","Data":"acf2e9eb1542e13e2a26b0e3eac8b39ba506b86eb9da7dcb7502cb8a50b56f16"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.681878 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"70557cb4-7672-4047-a601-1cf7723d8c82","Type":"ContainerStarted","Data":"418204d2e88cefb56ad0bb50e697f17e22c671e977beee302179c3faf56f5deb"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.723924 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"cf3a26a1e7cd14dcae1195a6000051337faf2696cd23a5a5141b9bca91072651"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 
11:36:37.743972 4678 generic.go:334] "Generic (PLEG): container finished" podID="3cfd80a7-5fb2-4a38-9a9b-839510edff06" containerID="f0e353e36f389b9baf5c230fa637960757b888b4752a24d4d9efa1a09723b176" exitCode=0 Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.744070 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-750d-account-create-w4fsn" event={"ID":"3cfd80a7-5fb2-4a38-9a9b-839510edff06","Type":"ContainerDied","Data":"f0e353e36f389b9baf5c230fa637960757b888b4752a24d4d9efa1a09723b176"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.753207 4678 generic.go:334] "Generic (PLEG): container finished" podID="1811771b-0c1b-4767-b4e2-ec8b52d12f18" containerID="4f902861562f9d0d1dd94162eea5081f397c0f5c9593cbee0475a01a30978c98" exitCode=0 Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.753300 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p5q8z" event={"ID":"1811771b-0c1b-4767-b4e2-ec8b52d12f18","Type":"ContainerDied","Data":"4f902861562f9d0d1dd94162eea5081f397c0f5c9593cbee0475a01a30978c98"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.755655 4678 generic.go:334] "Generic (PLEG): container finished" podID="2b637f29-368e-458f-93dd-77f478100f0b" containerID="2b94864f00a8fe20a194b3abcabaf4f2d1511aa9071b11f16f78c1d89886ab9e" exitCode=0 Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.755709 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dec4-account-create-q8wzh" event={"ID":"2b637f29-368e-458f-93dd-77f478100f0b","Type":"ContainerDied","Data":"2b94864f00a8fe20a194b3abcabaf4f2d1511aa9071b11f16f78c1d89886ab9e"} Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.776376 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q26st" event={"ID":"c9473500-25d5-4b49-a95a-c4b1de4ac854","Type":"ContainerStarted","Data":"edfcdbfa1575f069b3c0424d2357da4fa31719feed7950a2ee8ded278602de75"} Nov 24 11:36:37 crc 
kubenswrapper[4678]: I1124 11:36:37.887097 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.138:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.960280 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f794d99b-6371-445e-9bb9-74f0bdbee6bc" path="/var/lib/kubelet/pods/f794d99b-6371-445e-9bb9-74f0bdbee6bc/volumes" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.963824 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb16f708-35c9-421d-af98-ef172a021f0d" path="/var/lib/kubelet/pods/fb16f708-35c9-421d-af98-ef172a021f0d/volumes" Nov 24 11:36:37 crc kubenswrapper[4678]: I1124 11:36:37.964718 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 11:36:37 crc kubenswrapper[4678]: W1124 11:36:37.982973 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b2f0329_4af5_4426_a61e_2b3b1deff8a7.slice/crio-3bf1befaaf396a520c5c7b4278c058f45568fc5bd164488826630650c33ab301 WatchSource:0}: Error finding container 3bf1befaaf396a520c5c7b4278c058f45568fc5bd164488826630650c33ab301: Status 404 returned error can't find the container with id 3bf1befaaf396a520c5c7b4278c058f45568fc5bd164488826630650c33ab301 Nov 24 11:36:38 crc kubenswrapper[4678]: I1124 11:36:38.818601 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b2f0329-4af5-4426-a61e-2b3b1deff8a7","Type":"ContainerStarted","Data":"3bf1befaaf396a520c5c7b4278c058f45568fc5bd164488826630650c33ab301"} Nov 24 11:36:38 crc kubenswrapper[4678]: I1124 11:36:38.832968 4678 generic.go:334] "Generic (PLEG): container 
finished" podID="c9473500-25d5-4b49-a95a-c4b1de4ac854" containerID="31cd422052c78c53e4a0c7c29cc3f9e1aa12bad0cc4b6036639b40662d670412" exitCode=0 Nov 24 11:36:38 crc kubenswrapper[4678]: I1124 11:36:38.833515 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q26st" event={"ID":"c9473500-25d5-4b49-a95a-c4b1de4ac854","Type":"ContainerDied","Data":"31cd422052c78c53e4a0c7c29cc3f9e1aa12bad0cc4b6036639b40662d670412"} Nov 24 11:36:38 crc kubenswrapper[4678]: I1124 11:36:38.835450 4678 generic.go:334] "Generic (PLEG): container finished" podID="75a467ed-5cfa-44da-9e07-7902433ef5a0" containerID="7c68d0e13125e6de9f366d7a055c01ab2c02dd4593257575bc8a1bb9a12733c7" exitCode=0 Nov 24 11:36:38 crc kubenswrapper[4678]: I1124 11:36:38.835495 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b0fb-account-create-w4x74" event={"ID":"75a467ed-5cfa-44da-9e07-7902433ef5a0","Type":"ContainerDied","Data":"7c68d0e13125e6de9f366d7a055c01ab2c02dd4593257575bc8a1bb9a12733c7"} Nov 24 11:36:38 crc kubenswrapper[4678]: I1124 11:36:38.849004 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"3e89219699257422bf8e67050a0e0d1d02a1dc20a9c73b386f9f58ed6961c09c"} Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.333313 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.341946 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.348433 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.357815 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.376346 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.432458 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1811771b-0c1b-4767-b4e2-ec8b52d12f18-operator-scripts\") pod \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\" (UID: \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.432525 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzh62\" (UniqueName: \"kubernetes.io/projected/14aebdf2-73dd-4904-a5bb-01dbe513298e-kube-api-access-pzh62\") pod \"14aebdf2-73dd-4904-a5bb-01dbe513298e\" (UID: \"14aebdf2-73dd-4904-a5bb-01dbe513298e\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.432558 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44fm8\" (UniqueName: \"kubernetes.io/projected/2b637f29-368e-458f-93dd-77f478100f0b-kube-api-access-44fm8\") pod \"2b637f29-368e-458f-93dd-77f478100f0b\" (UID: \"2b637f29-368e-458f-93dd-77f478100f0b\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.432634 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbt5p\" (UniqueName: \"kubernetes.io/projected/1811771b-0c1b-4767-b4e2-ec8b52d12f18-kube-api-access-fbt5p\") pod \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\" (UID: \"1811771b-0c1b-4767-b4e2-ec8b52d12f18\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.432797 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jzzrg\" (UniqueName: \"kubernetes.io/projected/6ccdb39d-cd19-45a6-aa4d-bbee44622101-kube-api-access-jzzrg\") pod \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\" (UID: \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.432929 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccdb39d-cd19-45a6-aa4d-bbee44622101-operator-scripts\") pod \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\" (UID: \"6ccdb39d-cd19-45a6-aa4d-bbee44622101\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.433005 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b637f29-368e-458f-93dd-77f478100f0b-operator-scripts\") pod \"2b637f29-368e-458f-93dd-77f478100f0b\" (UID: \"2b637f29-368e-458f-93dd-77f478100f0b\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.433023 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14aebdf2-73dd-4904-a5bb-01dbe513298e-operator-scripts\") pod \"14aebdf2-73dd-4904-a5bb-01dbe513298e\" (UID: \"14aebdf2-73dd-4904-a5bb-01dbe513298e\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.433815 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1811771b-0c1b-4767-b4e2-ec8b52d12f18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1811771b-0c1b-4767-b4e2-ec8b52d12f18" (UID: "1811771b-0c1b-4767-b4e2-ec8b52d12f18"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.433924 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b637f29-368e-458f-93dd-77f478100f0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b637f29-368e-458f-93dd-77f478100f0b" (UID: "2b637f29-368e-458f-93dd-77f478100f0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.433965 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14aebdf2-73dd-4904-a5bb-01dbe513298e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14aebdf2-73dd-4904-a5bb-01dbe513298e" (UID: "14aebdf2-73dd-4904-a5bb-01dbe513298e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.434471 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ccdb39d-cd19-45a6-aa4d-bbee44622101-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ccdb39d-cd19-45a6-aa4d-bbee44622101" (UID: "6ccdb39d-cd19-45a6-aa4d-bbee44622101"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.440642 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aebdf2-73dd-4904-a5bb-01dbe513298e-kube-api-access-pzh62" (OuterVolumeSpecName: "kube-api-access-pzh62") pod "14aebdf2-73dd-4904-a5bb-01dbe513298e" (UID: "14aebdf2-73dd-4904-a5bb-01dbe513298e"). InnerVolumeSpecName "kube-api-access-pzh62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.440797 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ccdb39d-cd19-45a6-aa4d-bbee44622101-kube-api-access-jzzrg" (OuterVolumeSpecName: "kube-api-access-jzzrg") pod "6ccdb39d-cd19-45a6-aa4d-bbee44622101" (UID: "6ccdb39d-cd19-45a6-aa4d-bbee44622101"). InnerVolumeSpecName "kube-api-access-jzzrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.441235 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1811771b-0c1b-4767-b4e2-ec8b52d12f18-kube-api-access-fbt5p" (OuterVolumeSpecName: "kube-api-access-fbt5p") pod "1811771b-0c1b-4767-b4e2-ec8b52d12f18" (UID: "1811771b-0c1b-4767-b4e2-ec8b52d12f18"). InnerVolumeSpecName "kube-api-access-fbt5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.442281 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b637f29-368e-458f-93dd-77f478100f0b-kube-api-access-44fm8" (OuterVolumeSpecName: "kube-api-access-44fm8") pod "2b637f29-368e-458f-93dd-77f478100f0b" (UID: "2b637f29-368e-458f-93dd-77f478100f0b"). InnerVolumeSpecName "kube-api-access-44fm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.534577 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-operator-scripts\") pod \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\" (UID: \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.534819 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-przxb\" (UniqueName: \"kubernetes.io/projected/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-kube-api-access-przxb\") pod \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\" (UID: \"cb5591ea-c50b-46c1-8ed3-e2062967d0f1\") " Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.535117 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb5591ea-c50b-46c1-8ed3-e2062967d0f1" (UID: "cb5591ea-c50b-46c1-8ed3-e2062967d0f1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.535950 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccdb39d-cd19-45a6-aa4d-bbee44622101-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.535974 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b637f29-368e-458f-93dd-77f478100f0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.536001 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14aebdf2-73dd-4904-a5bb-01dbe513298e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.536012 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1811771b-0c1b-4767-b4e2-ec8b52d12f18-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.536021 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzh62\" (UniqueName: \"kubernetes.io/projected/14aebdf2-73dd-4904-a5bb-01dbe513298e-kube-api-access-pzh62\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.536033 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44fm8\" (UniqueName: \"kubernetes.io/projected/2b637f29-368e-458f-93dd-77f478100f0b-kube-api-access-44fm8\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.536043 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbt5p\" (UniqueName: \"kubernetes.io/projected/1811771b-0c1b-4767-b4e2-ec8b52d12f18-kube-api-access-fbt5p\") on node \"crc\" DevicePath \"\"" Nov 24 
11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.536051 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzzrg\" (UniqueName: \"kubernetes.io/projected/6ccdb39d-cd19-45a6-aa4d-bbee44622101-kube-api-access-jzzrg\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.536060 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.542384 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-kube-api-access-przxb" (OuterVolumeSpecName: "kube-api-access-przxb") pod "cb5591ea-c50b-46c1-8ed3-e2062967d0f1" (UID: "cb5591ea-c50b-46c1-8ed3-e2062967d0f1"). InnerVolumeSpecName "kube-api-access-przxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.638249 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-przxb\" (UniqueName: \"kubernetes.io/projected/cb5591ea-c50b-46c1-8ed3-e2062967d0f1-kube-api-access-przxb\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.887121 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-p5q8z" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.887359 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p5q8z" event={"ID":"1811771b-0c1b-4767-b4e2-ec8b52d12f18","Type":"ContainerDied","Data":"dca1e0fe423722a5fbcac9d3410ae6e70d4d3c1d21688fae6a626ef5c68909c4"} Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.887836 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dca1e0fe423722a5fbcac9d3410ae6e70d4d3c1d21688fae6a626ef5c68909c4" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.891414 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6472-account-create-dmhpl" event={"ID":"cb5591ea-c50b-46c1-8ed3-e2062967d0f1","Type":"ContainerDied","Data":"af6198ec161c7268d48c5ae971e8300eac88f02d364f12ad0f1781ab5a15d8db"} Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.891442 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af6198ec161c7268d48c5ae971e8300eac88f02d364f12ad0f1781ab5a15d8db" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.891443 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6472-account-create-dmhpl" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.896753 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dec4-account-create-q8wzh" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.898720 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6wt6l" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.901935 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-fpvbh" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.925464 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dec4-account-create-q8wzh" event={"ID":"2b637f29-368e-458f-93dd-77f478100f0b","Type":"ContainerDied","Data":"98942b3678d29fd9f0831b9cb694862e880f2d63785ee9c700b235f47f418049"} Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.925560 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98942b3678d29fd9f0831b9cb694862e880f2d63785ee9c700b235f47f418049" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.925579 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6wt6l" event={"ID":"6ccdb39d-cd19-45a6-aa4d-bbee44622101","Type":"ContainerDied","Data":"802e34290ca8c76a143146efda4f7b29fc0deaf118f44d4817c5ca4d4b48cc7c"} Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.925603 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802e34290ca8c76a143146efda4f7b29fc0deaf118f44d4817c5ca4d4b48cc7c" Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.925618 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fpvbh" event={"ID":"14aebdf2-73dd-4904-a5bb-01dbe513298e","Type":"ContainerDied","Data":"1f5dbaf462657ecb6561f5f5a5bafee2d4663cfdda2c971fa0c5faabfae2e94d"} Nov 24 11:36:41 crc kubenswrapper[4678]: I1124 11:36:41.925631 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f5dbaf462657ecb6561f5f5a5bafee2d4663cfdda2c971fa0c5faabfae2e94d" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.185879 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.205883 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-q26st" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.212213 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.296364 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cfd80a7-5fb2-4a38-9a9b-839510edff06-operator-scripts\") pod \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\" (UID: \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\") " Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.296479 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9473500-25d5-4b49-a95a-c4b1de4ac854-operator-scripts\") pod \"c9473500-25d5-4b49-a95a-c4b1de4ac854\" (UID: \"c9473500-25d5-4b49-a95a-c4b1de4ac854\") " Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.297583 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9473500-25d5-4b49-a95a-c4b1de4ac854-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9473500-25d5-4b49-a95a-c4b1de4ac854" (UID: "c9473500-25d5-4b49-a95a-c4b1de4ac854"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.297614 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cfd80a7-5fb2-4a38-9a9b-839510edff06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3cfd80a7-5fb2-4a38-9a9b-839510edff06" (UID: "3cfd80a7-5fb2-4a38-9a9b-839510edff06"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.297780 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvcxf\" (UniqueName: \"kubernetes.io/projected/75a467ed-5cfa-44da-9e07-7902433ef5a0-kube-api-access-mvcxf\") pod \"75a467ed-5cfa-44da-9e07-7902433ef5a0\" (UID: \"75a467ed-5cfa-44da-9e07-7902433ef5a0\") " Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.297856 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vmqc\" (UniqueName: \"kubernetes.io/projected/3cfd80a7-5fb2-4a38-9a9b-839510edff06-kube-api-access-4vmqc\") pod \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\" (UID: \"3cfd80a7-5fb2-4a38-9a9b-839510edff06\") " Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.299129 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a467ed-5cfa-44da-9e07-7902433ef5a0-operator-scripts\") pod \"75a467ed-5cfa-44da-9e07-7902433ef5a0\" (UID: \"75a467ed-5cfa-44da-9e07-7902433ef5a0\") " Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.299198 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct656\" (UniqueName: \"kubernetes.io/projected/c9473500-25d5-4b49-a95a-c4b1de4ac854-kube-api-access-ct656\") pod \"c9473500-25d5-4b49-a95a-c4b1de4ac854\" (UID: \"c9473500-25d5-4b49-a95a-c4b1de4ac854\") " Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.300101 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cfd80a7-5fb2-4a38-9a9b-839510edff06-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.300120 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c9473500-25d5-4b49-a95a-c4b1de4ac854-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.300186 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a467ed-5cfa-44da-9e07-7902433ef5a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75a467ed-5cfa-44da-9e07-7902433ef5a0" (UID: "75a467ed-5cfa-44da-9e07-7902433ef5a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.305309 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a467ed-5cfa-44da-9e07-7902433ef5a0-kube-api-access-mvcxf" (OuterVolumeSpecName: "kube-api-access-mvcxf") pod "75a467ed-5cfa-44da-9e07-7902433ef5a0" (UID: "75a467ed-5cfa-44da-9e07-7902433ef5a0"). InnerVolumeSpecName "kube-api-access-mvcxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.305480 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9473500-25d5-4b49-a95a-c4b1de4ac854-kube-api-access-ct656" (OuterVolumeSpecName: "kube-api-access-ct656") pod "c9473500-25d5-4b49-a95a-c4b1de4ac854" (UID: "c9473500-25d5-4b49-a95a-c4b1de4ac854"). InnerVolumeSpecName "kube-api-access-ct656". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.307339 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfd80a7-5fb2-4a38-9a9b-839510edff06-kube-api-access-4vmqc" (OuterVolumeSpecName: "kube-api-access-4vmqc") pod "3cfd80a7-5fb2-4a38-9a9b-839510edff06" (UID: "3cfd80a7-5fb2-4a38-9a9b-839510edff06"). InnerVolumeSpecName "kube-api-access-4vmqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.402136 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a467ed-5cfa-44da-9e07-7902433ef5a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.402189 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct656\" (UniqueName: \"kubernetes.io/projected/c9473500-25d5-4b49-a95a-c4b1de4ac854-kube-api-access-ct656\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.402205 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvcxf\" (UniqueName: \"kubernetes.io/projected/75a467ed-5cfa-44da-9e07-7902433ef5a0-kube-api-access-mvcxf\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.402217 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vmqc\" (UniqueName: \"kubernetes.io/projected/3cfd80a7-5fb2-4a38-9a9b-839510edff06-kube-api-access-4vmqc\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.937864 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q26st" event={"ID":"c9473500-25d5-4b49-a95a-c4b1de4ac854","Type":"ContainerDied","Data":"edfcdbfa1575f069b3c0424d2357da4fa31719feed7950a2ee8ded278602de75"} Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.938171 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edfcdbfa1575f069b3c0424d2357da4fa31719feed7950a2ee8ded278602de75" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.938239 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-q26st" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.946238 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-b0fb-account-create-w4x74" event={"ID":"75a467ed-5cfa-44da-9e07-7902433ef5a0","Type":"ContainerDied","Data":"dbc46155734fe15a43f50846085999892a50789af67edb635c4017f79e0edc59"} Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.946267 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc46155734fe15a43f50846085999892a50789af67edb635c4017f79e0edc59" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.946315 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-b0fb-account-create-w4x74" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.947828 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-750d-account-create-w4fsn" event={"ID":"3cfd80a7-5fb2-4a38-9a9b-839510edff06","Type":"ContainerDied","Data":"a54ea3f4e79d7bf09374273ecd4f3156d67b0becf72706912c9c799683b22715"} Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.947856 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a54ea3f4e79d7bf09374273ecd4f3156d67b0becf72706912c9c799683b22715" Nov 24 11:36:44 crc kubenswrapper[4678]: I1124 11:36:44.947906 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-750d-account-create-w4fsn" Nov 24 11:36:53 crc kubenswrapper[4678]: I1124 11:36:53.034049 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ch9vg" event={"ID":"3c6005a5-db1b-49b6-87ce-c507e10a6d21","Type":"ContainerStarted","Data":"250d204d0d5b06b5d2a1993bf32182b03ea3b115638f42af2574d90d657371d7"} Nov 24 11:36:53 crc kubenswrapper[4678]: I1124 11:36:53.037076 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4dr4g" event={"ID":"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87","Type":"ContainerStarted","Data":"a1ed7ed49e85e68ad5e031f4bee6ea6971c2f51f5ab6c7a10a335daa26c5f2d8"} Nov 24 11:36:53 crc kubenswrapper[4678]: I1124 11:36:53.040112 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"70557cb4-7672-4047-a601-1cf7723d8c82","Type":"ContainerStarted","Data":"ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef"} Nov 24 11:36:53 crc kubenswrapper[4678]: I1124 11:36:53.042411 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"ea78bfc0cc68c736787110dd595f8e495339194c67f41dacd652b88d5c1e25af"} Nov 24 11:36:53 crc kubenswrapper[4678]: I1124 11:36:53.042458 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"7b312ebf065fba700e22c1c58cb2a4a72b935a3cb81df7c1f31787e7d701e869"} Nov 24 11:36:53 crc kubenswrapper[4678]: I1124 11:36:53.064176 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ch9vg" podStartSLOduration=3.026022733 podStartE2EDuration="22.064154737s" podCreationTimestamp="2025-11-24 11:36:31 +0000 UTC" firstStartedPulling="2025-11-24 11:36:32.78345407 +0000 UTC m=+1203.714513709" 
lastFinishedPulling="2025-11-24 11:36:51.821586064 +0000 UTC m=+1222.752645713" observedRunningTime="2025-11-24 11:36:53.054716457 +0000 UTC m=+1223.985776106" watchObservedRunningTime="2025-11-24 11:36:53.064154737 +0000 UTC m=+1223.995214386" Nov 24 11:36:53 crc kubenswrapper[4678]: I1124 11:36:53.085327 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4dr4g" podStartSLOduration=4.07385819 podStartE2EDuration="20.085301087s" podCreationTimestamp="2025-11-24 11:36:33 +0000 UTC" firstStartedPulling="2025-11-24 11:36:35.728548066 +0000 UTC m=+1206.659607705" lastFinishedPulling="2025-11-24 11:36:51.739990943 +0000 UTC m=+1222.671050602" observedRunningTime="2025-11-24 11:36:53.071624365 +0000 UTC m=+1224.002684014" watchObservedRunningTime="2025-11-24 11:36:53.085301087 +0000 UTC m=+1224.016360736" Nov 24 11:36:53 crc kubenswrapper[4678]: I1124 11:36:53.103476 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=4.255729507 podStartE2EDuration="19.103423727s" podCreationTimestamp="2025-11-24 11:36:34 +0000 UTC" firstStartedPulling="2025-11-24 11:36:36.814034581 +0000 UTC m=+1207.745094210" lastFinishedPulling="2025-11-24 11:36:51.661728781 +0000 UTC m=+1222.592788430" observedRunningTime="2025-11-24 11:36:53.100993232 +0000 UTC m=+1224.032052901" watchObservedRunningTime="2025-11-24 11:36:53.103423727 +0000 UTC m=+1224.034483386" Nov 24 11:36:54 crc kubenswrapper[4678]: I1124 11:36:54.056804 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"a284df245d825578119d9dbb52c1d376c7f3bb501656460a0b9fb844f5b9f0c0"} Nov 24 11:36:55 crc kubenswrapper[4678]: I1124 11:36:55.079472 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"c75fc76ff46e5f09713227a5cd6c50528b56800af902270b8c23dfd536409562"} Nov 24 11:36:55 crc kubenswrapper[4678]: I1124 11:36:55.079852 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"4cb850519f662f6dd560e1e34e508b1669767667452c273ba35e3c9a4dda452c"} Nov 24 11:36:55 crc kubenswrapper[4678]: I1124 11:36:55.083069 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b2f0329-4af5-4426-a61e-2b3b1deff8a7","Type":"ContainerStarted","Data":"8630d3eda40cfd5707e27d2e77028edacbed6b343935687264da0d3e2e25b6a0"} Nov 24 11:36:56 crc kubenswrapper[4678]: I1124 11:36:56.112776 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"dd4e8e258b99771996065310bddeeecc4d9e1aa1e8bbef43385811e8d3b4cb40"} Nov 24 11:36:57 crc kubenswrapper[4678]: I1124 11:36:57.125747 4678 generic.go:334] "Generic (PLEG): container finished" podID="ef61d04e-97aa-4f5e-9fbd-f6abf2258b87" containerID="a1ed7ed49e85e68ad5e031f4bee6ea6971c2f51f5ab6c7a10a335daa26c5f2d8" exitCode=0 Nov 24 11:36:57 crc kubenswrapper[4678]: I1124 11:36:57.125894 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4dr4g" event={"ID":"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87","Type":"ContainerDied","Data":"a1ed7ed49e85e68ad5e031f4bee6ea6971c2f51f5ab6c7a10a335daa26c5f2d8"} Nov 24 11:36:57 crc kubenswrapper[4678]: I1124 11:36:57.137525 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"215c858265fc84d4a0ced0b67375f36fb0c642eedc6474f2d7b849a934a9ea07"} Nov 24 11:36:57 crc kubenswrapper[4678]: I1124 11:36:57.137557 
4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"a3920194df1ef61bb72ae7fbd2fd5970a77b89a852055353d2cf68173abd05db"} Nov 24 11:36:57 crc kubenswrapper[4678]: I1124 11:36:57.137568 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"8df8c9befa2786fa05cb4532537b589a32ee5da2673a5ac4b8783dfe9710f446"} Nov 24 11:36:57 crc kubenswrapper[4678]: I1124 11:36:57.137575 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"5e5fe44836f2613ccf71ee314a9a10245e75a7c9a2594e41cd84bdd5d0fcc7cb"} Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.153312 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"d9ffc0b1688eff127efe52592eb2b71c991303f9ba603125e1f49a8b62986474"} Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.153821 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"bfaeada2342dd21e2fab1f8bfa4dc3b0f764967d5af628203d1de984e3e28883"} Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.153834 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1a7a4a62-9baa-4df8-ba83-688dc6817249","Type":"ContainerStarted","Data":"bddd6bf41f9bd40bf87f67845df64a74c6c734f44bfca68f051385fbd69ebca7"} Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.198024 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.555466541 podStartE2EDuration="1m0.197998285s" 
podCreationTimestamp="2025-11-24 11:35:58 +0000 UTC" firstStartedPulling="2025-11-24 11:36:32.426406198 +0000 UTC m=+1203.357465837" lastFinishedPulling="2025-11-24 11:36:56.068937942 +0000 UTC m=+1226.999997581" observedRunningTime="2025-11-24 11:36:58.192970421 +0000 UTC m=+1229.124030080" watchObservedRunningTime="2025-11-24 11:36:58.197998285 +0000 UTC m=+1229.129057934" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482034 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttvdw"] Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482472 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ccdb39d-cd19-45a6-aa4d-bbee44622101" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482488 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ccdb39d-cd19-45a6-aa4d-bbee44622101" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482512 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9473500-25d5-4b49-a95a-c4b1de4ac854" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482519 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9473500-25d5-4b49-a95a-c4b1de4ac854" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482533 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfd80a7-5fb2-4a38-9a9b-839510edff06" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482540 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfd80a7-5fb2-4a38-9a9b-839510edff06" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482548 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a467ed-5cfa-44da-9e07-7902433ef5a0" containerName="mariadb-account-create" Nov 24 
11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482554 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a467ed-5cfa-44da-9e07-7902433ef5a0" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482561 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5591ea-c50b-46c1-8ed3-e2062967d0f1" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482566 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5591ea-c50b-46c1-8ed3-e2062967d0f1" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482582 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb16f708-35c9-421d-af98-ef172a021f0d" containerName="ovn-config" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482588 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb16f708-35c9-421d-af98-ef172a021f0d" containerName="ovn-config" Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482598 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b637f29-368e-458f-93dd-77f478100f0b" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482604 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b637f29-368e-458f-93dd-77f478100f0b" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482619 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14aebdf2-73dd-4904-a5bb-01dbe513298e" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482626 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="14aebdf2-73dd-4904-a5bb-01dbe513298e" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: E1124 11:36:58.482633 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1811771b-0c1b-4767-b4e2-ec8b52d12f18" 
containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482639 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="1811771b-0c1b-4767-b4e2-ec8b52d12f18" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482837 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9473500-25d5-4b49-a95a-c4b1de4ac854" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482853 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ccdb39d-cd19-45a6-aa4d-bbee44622101" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482863 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="1811771b-0c1b-4767-b4e2-ec8b52d12f18" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482876 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5591ea-c50b-46c1-8ed3-e2062967d0f1" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482890 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb16f708-35c9-421d-af98-ef172a021f0d" containerName="ovn-config" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482899 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b637f29-368e-458f-93dd-77f478100f0b" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482908 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="14aebdf2-73dd-4904-a5bb-01dbe513298e" containerName="mariadb-database-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482921 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a467ed-5cfa-44da-9e07-7902433ef5a0" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.482931 4678 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfd80a7-5fb2-4a38-9a9b-839510edff06" containerName="mariadb-account-create" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.484499 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.487045 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.512191 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttvdw"] Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.569006 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.569402 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krjpv\" (UniqueName: \"kubernetes.io/projected/4e057179-232e-4b13-b0a3-4456f123c3b6-kube-api-access-krjpv\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.569484 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.569513 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.569545 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.569573 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-config\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.586379 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.671746 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95xgc\" (UniqueName: \"kubernetes.io/projected/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-kube-api-access-95xgc\") pod \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.671906 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-config-data\") pod \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.671984 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-combined-ca-bundle\") pod \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\" (UID: \"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87\") " Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.672554 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.672624 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krjpv\" (UniqueName: \"kubernetes.io/projected/4e057179-232e-4b13-b0a3-4456f123c3b6-kube-api-access-krjpv\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 
11:36:58.672735 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.672779 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.672809 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.672837 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-config\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.673924 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.703348 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-config\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.703914 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-kube-api-access-95xgc" (OuterVolumeSpecName: "kube-api-access-95xgc") pod "ef61d04e-97aa-4f5e-9fbd-f6abf2258b87" (UID: "ef61d04e-97aa-4f5e-9fbd-f6abf2258b87"). InnerVolumeSpecName "kube-api-access-95xgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.704079 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.708870 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.709692 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.742238 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef61d04e-97aa-4f5e-9fbd-f6abf2258b87" (UID: "ef61d04e-97aa-4f5e-9fbd-f6abf2258b87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.743657 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krjpv\" (UniqueName: \"kubernetes.io/projected/4e057179-232e-4b13-b0a3-4456f123c3b6-kube-api-access-krjpv\") pod \"dnsmasq-dns-77585f5f8c-ttvdw\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.783129 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-config-data" (OuterVolumeSpecName: "config-data") pod "ef61d04e-97aa-4f5e-9fbd-f6abf2258b87" (UID: "ef61d04e-97aa-4f5e-9fbd-f6abf2258b87"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.804908 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95xgc\" (UniqueName: \"kubernetes.io/projected/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-kube-api-access-95xgc\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.804942 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.804952 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:36:58 crc kubenswrapper[4678]: I1124 11:36:58.903645 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.203217 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4dr4g" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.203311 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4dr4g" event={"ID":"ef61d04e-97aa-4f5e-9fbd-f6abf2258b87","Type":"ContainerDied","Data":"42fef467f59c5193188b9c2aaf11821b06f9e982a2f28f867f7a269e7b4e79b9"} Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.203532 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42fef467f59c5193188b9c2aaf11821b06f9e982a2f28f867f7a269e7b4e79b9" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.425837 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttvdw"] Nov 24 11:36:59 crc kubenswrapper[4678]: W1124 11:36:59.437738 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e057179_232e_4b13_b0a3_4456f123c3b6.slice/crio-1a86b74ad840b7dcf0bb95ba9b42552d70bd374dac8b86acad2a969bccbedb6d WatchSource:0}: Error finding container 1a86b74ad840b7dcf0bb95ba9b42552d70bd374dac8b86acad2a969bccbedb6d: Status 404 returned error can't find the container with id 1a86b74ad840b7dcf0bb95ba9b42552d70bd374dac8b86acad2a969bccbedb6d Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.441135 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttvdw"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.483850 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-l7s9d"] Nov 24 11:36:59 crc kubenswrapper[4678]: E1124 11:36:59.484311 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef61d04e-97aa-4f5e-9fbd-f6abf2258b87" containerName="keystone-db-sync" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.484329 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef61d04e-97aa-4f5e-9fbd-f6abf2258b87" 
containerName="keystone-db-sync" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.484563 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef61d04e-97aa-4f5e-9fbd-f6abf2258b87" containerName="keystone-db-sync" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.496751 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.505139 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-l7s9d"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.545068 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9wctt"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.547169 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.549124 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cvvbb" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.549937 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.550080 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.551290 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.551436 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.599536 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9wctt"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.637821 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-config\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.637901 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.637958 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-scripts\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.637996 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.638031 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt86b\" (UniqueName: \"kubernetes.io/projected/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-kube-api-access-dt86b\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.638049 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29fd6\" (UniqueName: \"kubernetes.io/projected/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-kube-api-access-29fd6\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.638065 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-combined-ca-bundle\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.638079 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-credential-keys\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.638095 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-svc\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.638125 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 
11:36:59.638141 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-config-data\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.638160 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-fernet-keys\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.696764 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-dnf2l"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.698179 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.710431 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.710709 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9xsmw" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.717575 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dnf2l"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741322 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-fernet-keys\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741436 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-config\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741489 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741537 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-scripts\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741562 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741596 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt86b\" (UniqueName: \"kubernetes.io/projected/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-kube-api-access-dt86b\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741614 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-29fd6\" (UniqueName: \"kubernetes.io/projected/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-kube-api-access-29fd6\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741635 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-combined-ca-bundle\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741651 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-credential-keys\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741690 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-svc\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.741727 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.742432 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-config-data\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.742604 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.747410 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-config\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.760462 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.764092 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-svc\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.764186 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: 
\"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.765084 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-config-data\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.765720 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-fernet-keys\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.777532 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-scripts\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.781340 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-combined-ca-bundle\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.785981 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-credential-keys\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.821002 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29fd6\" (UniqueName: \"kubernetes.io/projected/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-kube-api-access-29fd6\") pod \"keystone-bootstrap-9wctt\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.821691 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt86b\" (UniqueName: \"kubernetes.io/projected/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-kube-api-access-dt86b\") pod \"dnsmasq-dns-55fff446b9-l7s9d\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.842044 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qx8wj"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.843757 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.846424 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-config-data\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.846480 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-combined-ca-bundle\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.846629 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nklz8\" (UniqueName: \"kubernetes.io/projected/3fbb2c05-03d0-41ad-b306-0d196383c147-kube-api-access-nklz8\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.846658 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-98r9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.847553 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.847791 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.850097 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qx8wj"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.879912 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gwwg7"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.881395 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.902790 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2zbf7" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.903061 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.903501 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.905006 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gwwg7"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.928295 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4qbq8"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.929850 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4qbq8" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.932275 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.932343 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mw7nj" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.932535 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948420 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bf1a661-b2a3-458a-b504-2cac3277bd5d-etc-machine-id\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:36:59 crc kubenswrapper[4678]: 
I1124 11:36:59.948488 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklz8\" (UniqueName: \"kubernetes.io/projected/3fbb2c05-03d0-41ad-b306-0d196383c147-kube-api-access-nklz8\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948517 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-config\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948536 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-combined-ca-bundle\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948593 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-config-data\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948612 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqr8k\" (UniqueName: \"kubernetes.io/projected/471c5038-c8ee-4819-bb5d-93c509389555-kube-api-access-rqr8k\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948635 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-db-sync-config-data\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948660 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-config-data\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948704 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-combined-ca-bundle\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948727 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z99c7\" (UniqueName: \"kubernetes.io/projected/7bf1a661-b2a3-458a-b504-2cac3277bd5d-kube-api-access-z99c7\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.948746 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-combined-ca-bundle\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.949247 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-scripts\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.952372 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-config-data\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.955724 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4qbq8"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.956583 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-combined-ca-bundle\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.975394 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.979111 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-l7s9d"] Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.984651 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklz8\" (UniqueName: \"kubernetes.io/projected/3fbb2c05-03d0-41ad-b306-0d196383c147-kube-api-access-nklz8\") pod \"heat-db-sync-dnf2l\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " pod="openstack/heat-db-sync-dnf2l" Nov 24 11:36:59 crc kubenswrapper[4678]: I1124 11:36:59.990811 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.042852 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dnf2l" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.047798 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bcswl"] Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.049184 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.050835 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-combined-ca-bundle\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.050886 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-scripts\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051019 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-scripts\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051043 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bebde18-e99d-49a3-bb56-5f0de9049363-logs\") pod 
\"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051092 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bf1a661-b2a3-458a-b504-2cac3277bd5d-etc-machine-id\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051120 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc45b\" (UniqueName: \"kubernetes.io/projected/4bebde18-e99d-49a3-bb56-5f0de9049363-kube-api-access-hc45b\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051177 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-config-data\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051253 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bf1a661-b2a3-458a-b504-2cac3277bd5d-etc-machine-id\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051327 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-config\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " 
pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051353 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-combined-ca-bundle\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051478 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-config-data\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051506 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqr8k\" (UniqueName: \"kubernetes.io/projected/471c5038-c8ee-4819-bb5d-93c509389555-kube-api-access-rqr8k\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051551 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-db-sync-config-data\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051594 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-combined-ca-bundle\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051634 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z99c7\" (UniqueName: \"kubernetes.io/projected/7bf1a661-b2a3-458a-b504-2cac3277bd5d-kube-api-access-z99c7\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.051981 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.055723 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-49hht" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.057098 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-config-data\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.060582 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-combined-ca-bundle\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.063504 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-config\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.063985 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-db-sync-config-data\") pod 
\"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.064864 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-combined-ca-bundle\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.071743 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-scripts\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.077058 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z99c7\" (UniqueName: \"kubernetes.io/projected/7bf1a661-b2a3-458a-b504-2cac3277bd5d-kube-api-access-z99c7\") pod \"cinder-db-sync-qx8wj\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") " pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.077645 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqr8k\" (UniqueName: \"kubernetes.io/projected/471c5038-c8ee-4819-bb5d-93c509389555-kube-api-access-rqr8k\") pod \"neutron-db-sync-gwwg7\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.077699 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-2wzjt"] Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.079647 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.100550 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bcswl"] Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.125613 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-2wzjt"] Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.153261 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xwk4\" (UniqueName: \"kubernetes.io/projected/82d67de7-2cd2-480b-b8f9-1c73bff16add-kube-api-access-4xwk4\") pod \"barbican-db-sync-bcswl\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.153308 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-scripts\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.153367 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-combined-ca-bundle\") pod \"barbican-db-sync-bcswl\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.153413 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bebde18-e99d-49a3-bb56-5f0de9049363-logs\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.153454 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc45b\" (UniqueName: \"kubernetes.io/projected/4bebde18-e99d-49a3-bb56-5f0de9049363-kube-api-access-hc45b\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.154419 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bebde18-e99d-49a3-bb56-5f0de9049363-logs\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.154424 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-config-data\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.154790 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.154856 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.159493 4678 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-config-data\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.164520 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-scripts\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.164642 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-config\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.164812 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzn5m\" (UniqueName: \"kubernetes.io/projected/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-kube-api-access-vzn5m\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.164859 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-db-sync-config-data\") pod \"barbican-db-sync-bcswl\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.164961 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.165101 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.165150 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-combined-ca-bundle\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.169976 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc45b\" (UniqueName: \"kubernetes.io/projected/4bebde18-e99d-49a3-bb56-5f0de9049363-kube-api-access-hc45b\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.173823 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-combined-ca-bundle\") pod \"placement-db-sync-4qbq8\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.173907 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.189251 4678 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.189476 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.193094 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.193703 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.247705 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qx8wj" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.269794 4678 generic.go:334] "Generic (PLEG): container finished" podID="4e057179-232e-4b13-b0a3-4456f123c3b6" containerID="e378016fe135e0a2e0fd2482a05b86289958e35867d76366d4c9a5bcc69860e5" exitCode=0 Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.269845 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" event={"ID":"4e057179-232e-4b13-b0a3-4456f123c3b6","Type":"ContainerDied","Data":"e378016fe135e0a2e0fd2482a05b86289958e35867d76366d4c9a5bcc69860e5"} Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.269912 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" event={"ID":"4e057179-232e-4b13-b0a3-4456f123c3b6","Type":"ContainerStarted","Data":"1a86b74ad840b7dcf0bb95ba9b42552d70bd374dac8b86acad2a969bccbedb6d"} Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270035 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " 
pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270098 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270213 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-config\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270340 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-db-sync-config-data\") pod \"barbican-db-sync-bcswl\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270372 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzn5m\" (UniqueName: \"kubernetes.io/projected/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-kube-api-access-vzn5m\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270458 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 
11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270526 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-config-data\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270596 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270659 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270748 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xwk4\" (UniqueName: \"kubernetes.io/projected/82d67de7-2cd2-480b-b8f9-1c73bff16add-kube-api-access-4xwk4\") pod \"barbican-db-sync-bcswl\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270880 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-combined-ca-bundle\") pod \"barbican-db-sync-bcswl\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.270950 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-run-httpd\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.271003 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9k2q\" (UniqueName: \"kubernetes.io/projected/26fa8015-2aea-4aaf-baaf-bdcc15096441-kube-api-access-b9k2q\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.271060 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-scripts\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.271125 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.271131 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.271187 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-log-httpd\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.271700 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.271979 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.272161 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.275791 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-config\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.281874 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-combined-ca-bundle\") pod \"barbican-db-sync-bcswl\" (UID: 
\"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.282216 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-db-sync-config-data\") pod \"barbican-db-sync-bcswl\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.303002 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xwk4\" (UniqueName: \"kubernetes.io/projected/82d67de7-2cd2-480b-b8f9-1c73bff16add-kube-api-access-4xwk4\") pod \"barbican-db-sync-bcswl\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.304857 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.338121 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzn5m\" (UniqueName: \"kubernetes.io/projected/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-kube-api-access-vzn5m\") pod \"dnsmasq-dns-76fcf4b695-2wzjt\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.309692 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.306857 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.339433 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.373951 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-run-httpd\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.374011 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9k2q\" (UniqueName: \"kubernetes.io/projected/26fa8015-2aea-4aaf-baaf-bdcc15096441-kube-api-access-b9k2q\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.374046 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-scripts\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.374092 4678 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-log-httpd\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.374116 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.374198 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-config-data\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.374225 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.375776 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-run-httpd\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.382407 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-log-httpd\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc 
kubenswrapper[4678]: I1124 11:37:00.383013 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-scripts\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.394881 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-config-data\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.413309 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.417964 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9k2q\" (UniqueName: \"kubernetes.io/projected/26fa8015-2aea-4aaf-baaf-bdcc15096441-kube-api-access-b9k2q\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.419652 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.449191 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.471208 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.524864 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.885894 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-l7s9d"] Nov 24 11:37:00 crc kubenswrapper[4678]: I1124 11:37:00.892224 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:00.998291 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krjpv\" (UniqueName: \"kubernetes.io/projected/4e057179-232e-4b13-b0a3-4456f123c3b6-kube-api-access-krjpv\") pod \"4e057179-232e-4b13-b0a3-4456f123c3b6\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:00.998420 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-svc\") pod \"4e057179-232e-4b13-b0a3-4456f123c3b6\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:00.998512 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-config\") pod \"4e057179-232e-4b13-b0a3-4456f123c3b6\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:00.998577 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-sb\") pod \"4e057179-232e-4b13-b0a3-4456f123c3b6\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:00.998620 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-nb\") pod \"4e057179-232e-4b13-b0a3-4456f123c3b6\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:00.998723 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-swift-storage-0\") pod \"4e057179-232e-4b13-b0a3-4456f123c3b6\" (UID: \"4e057179-232e-4b13-b0a3-4456f123c3b6\") " Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.024165 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e057179-232e-4b13-b0a3-4456f123c3b6-kube-api-access-krjpv" (OuterVolumeSpecName: "kube-api-access-krjpv") pod "4e057179-232e-4b13-b0a3-4456f123c3b6" (UID: "4e057179-232e-4b13-b0a3-4456f123c3b6"). InnerVolumeSpecName "kube-api-access-krjpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.042564 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e057179-232e-4b13-b0a3-4456f123c3b6" (UID: "4e057179-232e-4b13-b0a3-4456f123c3b6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.055479 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e057179-232e-4b13-b0a3-4456f123c3b6" (UID: "4e057179-232e-4b13-b0a3-4456f123c3b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.070368 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-config" (OuterVolumeSpecName: "config") pod "4e057179-232e-4b13-b0a3-4456f123c3b6" (UID: "4e057179-232e-4b13-b0a3-4456f123c3b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.078494 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4e057179-232e-4b13-b0a3-4456f123c3b6" (UID: "4e057179-232e-4b13-b0a3-4456f123c3b6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.102681 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.102711 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krjpv\" (UniqueName: \"kubernetes.io/projected/4e057179-232e-4b13-b0a3-4456f123c3b6-kube-api-access-krjpv\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.102725 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.102734 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.102742 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.106730 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e057179-232e-4b13-b0a3-4456f123c3b6" (UID: "4e057179-232e-4b13-b0a3-4456f123c3b6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.132442 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9wctt"] Nov 24 11:37:01 crc kubenswrapper[4678]: W1124 11:37:01.134742 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ce35dba_3d28_4a08_b5ec_0180fc1692c4.slice/crio-5bf5aacaf18459418f871ca63a1f43ec8c68e8a0f1f1dace847dd7d0fd97ad72 WatchSource:0}: Error finding container 5bf5aacaf18459418f871ca63a1f43ec8c68e8a0f1f1dace847dd7d0fd97ad72: Status 404 returned error can't find the container with id 5bf5aacaf18459418f871ca63a1f43ec8c68e8a0f1f1dace847dd7d0fd97ad72 Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.155272 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dnf2l"] Nov 24 11:37:01 crc kubenswrapper[4678]: W1124 11:37:01.158559 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fbb2c05_03d0_41ad_b306_0d196383c147.slice/crio-b4aef3a567aaa555c5cefca7e5a904a971aca4106e37c0660a4c2c74a7593955 WatchSource:0}: Error finding container b4aef3a567aaa555c5cefca7e5a904a971aca4106e37c0660a4c2c74a7593955: Status 404 returned error can't find the container with id b4aef3a567aaa555c5cefca7e5a904a971aca4106e37c0660a4c2c74a7593955 Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.206203 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e057179-232e-4b13-b0a3-4456f123c3b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.286320 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9wctt" 
event={"ID":"6ce35dba-3d28-4a08-b5ec-0180fc1692c4","Type":"ContainerStarted","Data":"5bf5aacaf18459418f871ca63a1f43ec8c68e8a0f1f1dace847dd7d0fd97ad72"} Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.287542 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dnf2l" event={"ID":"3fbb2c05-03d0-41ad-b306-0d196383c147","Type":"ContainerStarted","Data":"b4aef3a567aaa555c5cefca7e5a904a971aca4106e37c0660a4c2c74a7593955"} Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.289103 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" event={"ID":"4e057179-232e-4b13-b0a3-4456f123c3b6","Type":"ContainerDied","Data":"1a86b74ad840b7dcf0bb95ba9b42552d70bd374dac8b86acad2a969bccbedb6d"} Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.289140 4678 scope.go:117] "RemoveContainer" containerID="e378016fe135e0a2e0fd2482a05b86289958e35867d76366d4c9a5bcc69860e5" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.289314 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-ttvdw" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.296085 4678 generic.go:334] "Generic (PLEG): container finished" podID="b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" containerID="625183c4ecaebcea8bbab996ff2b7d5eef180f19d394e620e7d859ef5fa97ff1" exitCode=0 Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.296208 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" event={"ID":"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4","Type":"ContainerDied","Data":"625183c4ecaebcea8bbab996ff2b7d5eef180f19d394e620e7d859ef5fa97ff1"} Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.296292 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" event={"ID":"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4","Type":"ContainerStarted","Data":"049588841e0fcf5fc15cf41ee1964fd3c78b8cf8a22382afc32f326f7e287ab7"} Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.468021 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttvdw"] Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.497955 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-ttvdw"] Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.540020 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gwwg7"] Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.546003 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4qbq8"] Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.555815 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qx8wj"] Nov 24 11:37:01 crc kubenswrapper[4678]: W1124 11:37:01.568226 4678 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bf1a661_b2a3_458a_b504_2cac3277bd5d.slice/crio-9acbced18916141ff136778636c6c693ef603d3124cf4b1155f394e4aa53e51a WatchSource:0}: Error finding container 9acbced18916141ff136778636c6c693ef603d3124cf4b1155f394e4aa53e51a: Status 404 returned error can't find the container with id 9acbced18916141ff136778636c6c693ef603d3124cf4b1155f394e4aa53e51a Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.573727 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bcswl"] Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.921748 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e057179-232e-4b13-b0a3-4456f123c3b6" path="/var/lib/kubelet/pods/4e057179-232e-4b13-b0a3-4456f123c3b6/volumes" Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.925635 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:37:01 crc kubenswrapper[4678]: I1124 11:37:01.927341 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-2wzjt"] Nov 24 11:37:01 crc kubenswrapper[4678]: W1124 11:37:01.937254 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26fa8015_2aea_4aaf_baaf_bdcc15096441.slice/crio-02f9e6f27545e901f4b27600d7c3a1ac102724ed00a2760cf11c8dbc0b4d47a4 WatchSource:0}: Error finding container 02f9e6f27545e901f4b27600d7c3a1ac102724ed00a2760cf11c8dbc0b4d47a4: Status 404 returned error can't find the container with id 02f9e6f27545e901f4b27600d7c3a1ac102724ed00a2760cf11c8dbc0b4d47a4 Nov 24 11:37:01 crc kubenswrapper[4678]: W1124 11:37:01.956129 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39a2ab81_7e34_43bf_94ad_47a0452dbbfa.slice/crio-f5246dcff3ac200ee5e8177440c8452fc54ecea0b63a74d24e5331a8299788a7 
WatchSource:0}: Error finding container f5246dcff3ac200ee5e8177440c8452fc54ecea0b63a74d24e5331a8299788a7: Status 404 returned error can't find the container with id f5246dcff3ac200ee5e8177440c8452fc54ecea0b63a74d24e5331a8299788a7 Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.084260 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.146326 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-sb\") pod \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.146692 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-config\") pod \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.146778 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-swift-storage-0\") pod \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.146796 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt86b\" (UniqueName: \"kubernetes.io/projected/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-kube-api-access-dt86b\") pod \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.146817 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-svc\") pod \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.146958 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-nb\") pod \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\" (UID: \"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4\") " Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.173427 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-kube-api-access-dt86b" (OuterVolumeSpecName: "kube-api-access-dt86b") pod "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" (UID: "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4"). InnerVolumeSpecName "kube-api-access-dt86b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.206600 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" (UID: "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.252376 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" (UID: "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.252683 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.252724 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.252739 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt86b\" (UniqueName: \"kubernetes.io/projected/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-kube-api-access-dt86b\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.328337 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" (UID: "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.363472 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.392610 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-config" (OuterVolumeSpecName: "config") pod "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" (UID: "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.396395 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bcswl" event={"ID":"82d67de7-2cd2-480b-b8f9-1c73bff16add","Type":"ContainerStarted","Data":"e4bf7fd9675516f01796d9f35b6cdef968b4fcf52a8a25835f187b5cf8fe69c4"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.399864 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" event={"ID":"b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4","Type":"ContainerDied","Data":"049588841e0fcf5fc15cf41ee1964fd3c78b8cf8a22382afc32f326f7e287ab7"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.399915 4678 scope.go:117] "RemoveContainer" containerID="625183c4ecaebcea8bbab996ff2b7d5eef180f19d394e620e7d859ef5fa97ff1" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.400015 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-l7s9d" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.405062 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" (UID: "b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.407872 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qx8wj" event={"ID":"7bf1a661-b2a3-458a-b504-2cac3277bd5d","Type":"ContainerStarted","Data":"9acbced18916141ff136778636c6c693ef603d3124cf4b1155f394e4aa53e51a"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.430811 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" event={"ID":"39a2ab81-7e34-43bf-94ad-47a0452dbbfa","Type":"ContainerStarted","Data":"f5246dcff3ac200ee5e8177440c8452fc54ecea0b63a74d24e5331a8299788a7"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.447933 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9wctt" event={"ID":"6ce35dba-3d28-4a08-b5ec-0180fc1692c4","Type":"ContainerStarted","Data":"9a57756f7447c44c2fcb5a2fd3cfd7f2bd3fd44b62a4fd7bf70162e48c6d1627"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.465064 4678 generic.go:334] "Generic (PLEG): container finished" podID="8b2f0329-4af5-4426-a61e-2b3b1deff8a7" containerID="8630d3eda40cfd5707e27d2e77028edacbed6b343935687264da0d3e2e25b6a0" exitCode=0 Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.465180 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b2f0329-4af5-4426-a61e-2b3b1deff8a7","Type":"ContainerDied","Data":"8630d3eda40cfd5707e27d2e77028edacbed6b343935687264da0d3e2e25b6a0"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.468889 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.468920 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.475436 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9wctt" podStartSLOduration=3.47541903 podStartE2EDuration="3.47541903s" podCreationTimestamp="2025-11-24 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:02.475134652 +0000 UTC m=+1233.406194281" watchObservedRunningTime="2025-11-24 11:37:02.47541903 +0000 UTC m=+1233.406478669" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.503333 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26fa8015-2aea-4aaf-baaf-bdcc15096441","Type":"ContainerStarted","Data":"02f9e6f27545e901f4b27600d7c3a1ac102724ed00a2760cf11c8dbc0b4d47a4"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.542995 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4qbq8" event={"ID":"4bebde18-e99d-49a3-bb56-5f0de9049363","Type":"ContainerStarted","Data":"cf4dc55d46b240af07302443c7959900ac7a2adf58a9d7538d1aa8ebf4b7c6de"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.546604 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gwwg7" event={"ID":"471c5038-c8ee-4819-bb5d-93c509389555","Type":"ContainerStarted","Data":"3b91ff2ca751c03243723081ab6076402a78d1c9de6e70c396865e5e0b2b1d92"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.546637 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gwwg7" event={"ID":"471c5038-c8ee-4819-bb5d-93c509389555","Type":"ContainerStarted","Data":"8b4d86b476e5e5ef33a910572f8e6577b63abf96cc0d6cbdc5d339fe5c02d948"} Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.705961 4678 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gwwg7" podStartSLOduration=3.705943392 podStartE2EDuration="3.705943392s" podCreationTimestamp="2025-11-24 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:02.579195637 +0000 UTC m=+1233.510255276" watchObservedRunningTime="2025-11-24 11:37:02.705943392 +0000 UTC m=+1233.637003031" Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.791724 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.879086 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-l7s9d"] Nov 24 11:37:02 crc kubenswrapper[4678]: I1124 11:37:02.903033 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-l7s9d"] Nov 24 11:37:03 crc kubenswrapper[4678]: I1124 11:37:03.571096 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b2f0329-4af5-4426-a61e-2b3b1deff8a7","Type":"ContainerStarted","Data":"079a94e7b7f964d420806e3807d715a470d2ec1e3a456f6f8558c2a3c3f49ac0"} Nov 24 11:37:03 crc kubenswrapper[4678]: I1124 11:37:03.576454 4678 generic.go:334] "Generic (PLEG): container finished" podID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerID="385f1401bdc7e40c51b780ae79a32ccb42d4f08183de60cb0656300539dce972" exitCode=0 Nov 24 11:37:03 crc kubenswrapper[4678]: I1124 11:37:03.576794 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" event={"ID":"39a2ab81-7e34-43bf-94ad-47a0452dbbfa","Type":"ContainerDied","Data":"385f1401bdc7e40c51b780ae79a32ccb42d4f08183de60cb0656300539dce972"} Nov 24 11:37:03 crc kubenswrapper[4678]: I1124 11:37:03.915048 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" path="/var/lib/kubelet/pods/b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4/volumes" Nov 24 11:37:04 crc kubenswrapper[4678]: I1124 11:37:04.604531 4678 generic.go:334] "Generic (PLEG): container finished" podID="3c6005a5-db1b-49b6-87ce-c507e10a6d21" containerID="250d204d0d5b06b5d2a1993bf32182b03ea3b115638f42af2574d90d657371d7" exitCode=0 Nov 24 11:37:04 crc kubenswrapper[4678]: I1124 11:37:04.604594 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ch9vg" event={"ID":"3c6005a5-db1b-49b6-87ce-c507e10a6d21","Type":"ContainerDied","Data":"250d204d0d5b06b5d2a1993bf32182b03ea3b115638f42af2574d90d657371d7"} Nov 24 11:37:04 crc kubenswrapper[4678]: I1124 11:37:04.613290 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" event={"ID":"39a2ab81-7e34-43bf-94ad-47a0452dbbfa","Type":"ContainerStarted","Data":"3a47c31434a8727ba90b97cefe8e96a410dee7ea9b5df00c1be488ebc00c5df5"} Nov 24 11:37:04 crc kubenswrapper[4678]: I1124 11:37:04.614240 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:04 crc kubenswrapper[4678]: I1124 11:37:04.665096 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" podStartSLOduration=5.665073798 podStartE2EDuration="5.665073798s" podCreationTimestamp="2025-11-24 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:04.646884832 +0000 UTC m=+1235.577944471" watchObservedRunningTime="2025-11-24 11:37:04.665073798 +0000 UTC m=+1235.596133517" Nov 24 11:37:06 crc kubenswrapper[4678]: I1124 11:37:06.640936 4678 generic.go:334] "Generic (PLEG): container finished" podID="6ce35dba-3d28-4a08-b5ec-0180fc1692c4" containerID="9a57756f7447c44c2fcb5a2fd3cfd7f2bd3fd44b62a4fd7bf70162e48c6d1627" exitCode=0 
Nov 24 11:37:06 crc kubenswrapper[4678]: I1124 11:37:06.641490 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9wctt" event={"ID":"6ce35dba-3d28-4a08-b5ec-0180fc1692c4","Type":"ContainerDied","Data":"9a57756f7447c44c2fcb5a2fd3cfd7f2bd3fd44b62a4fd7bf70162e48c6d1627"} Nov 24 11:37:06 crc kubenswrapper[4678]: I1124 11:37:06.648124 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b2f0329-4af5-4426-a61e-2b3b1deff8a7","Type":"ContainerStarted","Data":"e766960510df66218d9d2a9c6e2b15779875ed26f0d0e4f778ef1d98e732c85a"} Nov 24 11:37:10 crc kubenswrapper[4678]: I1124 11:37:10.472830 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:10 crc kubenswrapper[4678]: I1124 11:37:10.528246 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2kwbz"] Nov 24 11:37:10 crc kubenswrapper[4678]: I1124 11:37:10.528494 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-2kwbz" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="dnsmasq-dns" containerID="cri-o://05b090a80272c7a581a95ec04f56e7913a69e197646b82568217918fd9ece808" gracePeriod=10 Nov 24 11:37:10 crc kubenswrapper[4678]: I1124 11:37:10.703060 4678 generic.go:334] "Generic (PLEG): container finished" podID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerID="05b090a80272c7a581a95ec04f56e7913a69e197646b82568217918fd9ece808" exitCode=0 Nov 24 11:37:10 crc kubenswrapper[4678]: I1124 11:37:10.703114 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2kwbz" event={"ID":"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6","Type":"ContainerDied","Data":"05b090a80272c7a581a95ec04f56e7913a69e197646b82568217918fd9ece808"} Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.687552 4678 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ch9vg" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.731647 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ch9vg" event={"ID":"3c6005a5-db1b-49b6-87ce-c507e10a6d21","Type":"ContainerDied","Data":"ad21b0f8af6e1a79896295afb4d1023134fff0801df0de4878c3e60493370dba"} Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.731761 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad21b0f8af6e1a79896295afb4d1023134fff0801df0de4878c3e60493370dba" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.731779 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ch9vg" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.768801 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-combined-ca-bundle\") pod \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.768898 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-config-data\") pod \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.769062 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq72w\" (UniqueName: \"kubernetes.io/projected/3c6005a5-db1b-49b6-87ce-c507e10a6d21-kube-api-access-sq72w\") pod \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.769105 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-db-sync-config-data\") pod \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\" (UID: \"3c6005a5-db1b-49b6-87ce-c507e10a6d21\") " Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.780179 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6005a5-db1b-49b6-87ce-c507e10a6d21-kube-api-access-sq72w" (OuterVolumeSpecName: "kube-api-access-sq72w") pod "3c6005a5-db1b-49b6-87ce-c507e10a6d21" (UID: "3c6005a5-db1b-49b6-87ce-c507e10a6d21"). InnerVolumeSpecName "kube-api-access-sq72w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.791079 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3c6005a5-db1b-49b6-87ce-c507e10a6d21" (UID: "3c6005a5-db1b-49b6-87ce-c507e10a6d21"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.825438 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c6005a5-db1b-49b6-87ce-c507e10a6d21" (UID: "3c6005a5-db1b-49b6-87ce-c507e10a6d21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.842573 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-config-data" (OuterVolumeSpecName: "config-data") pod "3c6005a5-db1b-49b6-87ce-c507e10a6d21" (UID: "3c6005a5-db1b-49b6-87ce-c507e10a6d21"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.871873 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.871910 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq72w\" (UniqueName: \"kubernetes.io/projected/3c6005a5-db1b-49b6-87ce-c507e10a6d21-kube-api-access-sq72w\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.871921 4678 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:12 crc kubenswrapper[4678]: I1124 11:37:12.871930 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6005a5-db1b-49b6-87ce-c507e10a6d21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.175372 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cpddm"] Nov 24 11:37:14 crc kubenswrapper[4678]: E1124 11:37:14.177132 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e057179-232e-4b13-b0a3-4456f123c3b6" containerName="init" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.177156 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e057179-232e-4b13-b0a3-4456f123c3b6" containerName="init" Nov 24 11:37:14 crc kubenswrapper[4678]: E1124 11:37:14.177194 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6005a5-db1b-49b6-87ce-c507e10a6d21" containerName="glance-db-sync" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.177200 4678 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3c6005a5-db1b-49b6-87ce-c507e10a6d21" containerName="glance-db-sync" Nov 24 11:37:14 crc kubenswrapper[4678]: E1124 11:37:14.177208 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" containerName="init" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.177214 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" containerName="init" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.177436 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b6f5f7-2fb9-4ea6-8ef7-7cd2c4417eb4" containerName="init" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.177453 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e057179-232e-4b13-b0a3-4456f123c3b6" containerName="init" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.177479 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6005a5-db1b-49b6-87ce-c507e10a6d21" containerName="glance-db-sync" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.179249 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.214443 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cpddm"] Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.215517 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-config\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.215622 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.215729 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.215771 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgkj\" (UniqueName: \"kubernetes.io/projected/9e72e3f7-0533-462d-b9d0-7df8c8de0108-kube-api-access-csgkj\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.215793 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.215838 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.318180 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.318496 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.318659 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csgkj\" (UniqueName: \"kubernetes.io/projected/9e72e3f7-0533-462d-b9d0-7df8c8de0108-kube-api-access-csgkj\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.318802 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.319046 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.319366 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-config\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.320730 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-config\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.321113 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.321877 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-swift-storage-0\") pod 
\"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.321908 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.322357 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.352108 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgkj\" (UniqueName: \"kubernetes.io/projected/9e72e3f7-0533-462d-b9d0-7df8c8de0108-kube-api-access-csgkj\") pod \"dnsmasq-dns-8b5c85b87-cpddm\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.553289 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:14 crc kubenswrapper[4678]: I1124 11:37:14.888835 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2kwbz" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.053495 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.055720 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.057909 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.058435 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.058465 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hr99s" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.094127 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.137480 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.137607 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.137639 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.137684 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.137915 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k87l\" (UniqueName: \"kubernetes.io/projected/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-kube-api-access-8k87l\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.138250 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-logs\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.138726 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.240692 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.240776 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.240824 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.240845 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.240866 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.240911 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k87l\" (UniqueName: \"kubernetes.io/projected/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-kube-api-access-8k87l\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.240934 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-logs\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.241511 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-logs\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.241792 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.242643 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.248014 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.251008 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.251087 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.260718 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k87l\" (UniqueName: \"kubernetes.io/projected/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-kube-api-access-8k87l\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.287848 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " 
pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.334899 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.337103 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.340417 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.351716 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.381951 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.446977 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.447044 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.447088 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gnxd\" (UniqueName: \"kubernetes.io/projected/c118536e-63f0-4b11-8c2c-8edfdb3700d3-kube-api-access-8gnxd\") 
pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.447123 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.447153 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.447278 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.447306 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.549499 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gnxd\" (UniqueName: 
\"kubernetes.io/projected/c118536e-63f0-4b11-8c2c-8edfdb3700d3-kube-api-access-8gnxd\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.549552 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.549576 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.549649 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.549681 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.549781 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.549808 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.550273 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.553756 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.553909 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.557782 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " 
pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.561320 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.571064 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnxd\" (UniqueName: \"kubernetes.io/projected/c118536e-63f0-4b11-8c2c-8edfdb3700d3-kube-api-access-8gnxd\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.571279 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.601631 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: I1124 11:37:15.695838 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:15 crc kubenswrapper[4678]: E1124 11:37:15.970608 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 24 11:37:15 crc kubenswrapper[4678]: E1124 11:37:15.971077 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64fh5c4h67bh5d7h59fh586h5d8h9fhb8hc4h88h5f8h77h675hb6h58ch5d5h5d7h565h55bh5d5h9dhbch696h5f9h548hdfhfch5cdh68ch66h595q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9k2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/u
sr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(26fa8015-2aea-4aaf-baaf-bdcc15096441): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:37:16 crc kubenswrapper[4678]: I1124 11:37:16.893250 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:16 crc kubenswrapper[4678]: I1124 11:37:16.944513 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:19 crc kubenswrapper[4678]: I1124 11:37:19.888739 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2kwbz" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.478203 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.555269 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-fernet-keys\") pod \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.555368 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-scripts\") pod \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.555420 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-combined-ca-bundle\") pod \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.555479 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-credential-keys\") pod \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.555509 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-config-data\") pod \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.556380 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29fd6\" (UniqueName: 
\"kubernetes.io/projected/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-kube-api-access-29fd6\") pod \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\" (UID: \"6ce35dba-3d28-4a08-b5ec-0180fc1692c4\") " Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.573942 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6ce35dba-3d28-4a08-b5ec-0180fc1692c4" (UID: "6ce35dba-3d28-4a08-b5ec-0180fc1692c4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.576757 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6ce35dba-3d28-4a08-b5ec-0180fc1692c4" (UID: "6ce35dba-3d28-4a08-b5ec-0180fc1692c4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.576874 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-scripts" (OuterVolumeSpecName: "scripts") pod "6ce35dba-3d28-4a08-b5ec-0180fc1692c4" (UID: "6ce35dba-3d28-4a08-b5ec-0180fc1692c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.578225 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-kube-api-access-29fd6" (OuterVolumeSpecName: "kube-api-access-29fd6") pod "6ce35dba-3d28-4a08-b5ec-0180fc1692c4" (UID: "6ce35dba-3d28-4a08-b5ec-0180fc1692c4"). InnerVolumeSpecName "kube-api-access-29fd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.592582 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ce35dba-3d28-4a08-b5ec-0180fc1692c4" (UID: "6ce35dba-3d28-4a08-b5ec-0180fc1692c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.604783 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-config-data" (OuterVolumeSpecName: "config-data") pod "6ce35dba-3d28-4a08-b5ec-0180fc1692c4" (UID: "6ce35dba-3d28-4a08-b5ec-0180fc1692c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.658211 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.658243 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.658257 4678 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.658268 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:20 
crc kubenswrapper[4678]: I1124 11:37:20.658278 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29fd6\" (UniqueName: \"kubernetes.io/projected/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-kube-api-access-29fd6\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.658289 4678 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ce35dba-3d28-4a08-b5ec-0180fc1692c4-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.821067 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9wctt" event={"ID":"6ce35dba-3d28-4a08-b5ec-0180fc1692c4","Type":"ContainerDied","Data":"5bf5aacaf18459418f871ca63a1f43ec8c68e8a0f1f1dace847dd7d0fd97ad72"} Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.821125 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf5aacaf18459418f871ca63a1f43ec8c68e8a0f1f1dace847dd7d0fd97ad72" Nov 24 11:37:20 crc kubenswrapper[4678]: I1124 11:37:20.821220 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9wctt" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.570746 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9wctt"] Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.580237 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9wctt"] Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.673813 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x5lx5"] Nov 24 11:37:21 crc kubenswrapper[4678]: E1124 11:37:21.674376 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce35dba-3d28-4a08-b5ec-0180fc1692c4" containerName="keystone-bootstrap" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.674390 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce35dba-3d28-4a08-b5ec-0180fc1692c4" containerName="keystone-bootstrap" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.674643 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce35dba-3d28-4a08-b5ec-0180fc1692c4" containerName="keystone-bootstrap" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.675504 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.678534 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.679568 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.679738 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.679935 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cvvbb" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.681587 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.688373 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x5lx5"] Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.789559 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-config-data\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.789924 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-fernet-keys\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.790404 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-credential-keys\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.790542 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkxvg\" (UniqueName: \"kubernetes.io/projected/195eda15-ecc1-4041-b42e-ffe751e686af-kube-api-access-tkxvg\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.790802 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-scripts\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.790834 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-combined-ca-bundle\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.893516 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-credential-keys\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.893571 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkxvg\" 
(UniqueName: \"kubernetes.io/projected/195eda15-ecc1-4041-b42e-ffe751e686af-kube-api-access-tkxvg\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.893613 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-scripts\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.893628 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-combined-ca-bundle\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.893706 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-config-data\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.893765 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-fernet-keys\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.901308 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-combined-ca-bundle\") pod 
\"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.901604 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-fernet-keys\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.902438 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-credential-keys\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.902501 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-config-data\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.903993 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-scripts\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.913187 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce35dba-3d28-4a08-b5ec-0180fc1692c4" path="/var/lib/kubelet/pods/6ce35dba-3d28-4a08-b5ec-0180fc1692c4/volumes" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.922431 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkxvg\" 
(UniqueName: \"kubernetes.io/projected/195eda15-ecc1-4041-b42e-ffe751e686af-kube-api-access-tkxvg\") pod \"keystone-bootstrap-x5lx5\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:21 crc kubenswrapper[4678]: I1124 11:37:21.993942 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:25 crc kubenswrapper[4678]: I1124 11:37:25.893109 4678 generic.go:334] "Generic (PLEG): container finished" podID="471c5038-c8ee-4819-bb5d-93c509389555" containerID="3b91ff2ca751c03243723081ab6076402a78d1c9de6e70c396865e5e0b2b1d92" exitCode=0 Nov 24 11:37:25 crc kubenswrapper[4678]: I1124 11:37:25.893211 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gwwg7" event={"ID":"471c5038-c8ee-4819-bb5d-93c509389555","Type":"ContainerDied","Data":"3b91ff2ca751c03243723081ab6076402a78d1c9de6e70c396865e5e0b2b1d92"} Nov 24 11:37:28 crc kubenswrapper[4678]: E1124 11:37:28.197777 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 24 11:37:28 crc kubenswrapper[4678]: E1124 11:37:28.198422 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nklz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-dnf2l_openstack(3fbb2c05-03d0-41ad-b306-0d196383c147): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 
24 11:37:28 crc kubenswrapper[4678]: E1124 11:37:28.200648 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-dnf2l" podUID="3fbb2c05-03d0-41ad-b306-0d196383c147" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.306563 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.315177 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.353834 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqr8k\" (UniqueName: \"kubernetes.io/projected/471c5038-c8ee-4819-bb5d-93c509389555-kube-api-access-rqr8k\") pod \"471c5038-c8ee-4819-bb5d-93c509389555\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.353982 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-config\") pod \"471c5038-c8ee-4819-bb5d-93c509389555\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.354106 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdbp\" (UniqueName: \"kubernetes.io/projected/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-kube-api-access-gjdbp\") pod \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.354153 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-dns-svc\") pod \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.354177 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-combined-ca-bundle\") pod \"471c5038-c8ee-4819-bb5d-93c509389555\" (UID: \"471c5038-c8ee-4819-bb5d-93c509389555\") " Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.354229 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-config\") pod \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.354265 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-sb\") pod \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.354288 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-nb\") pod \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\" (UID: \"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6\") " Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.369950 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-kube-api-access-gjdbp" (OuterVolumeSpecName: "kube-api-access-gjdbp") pod "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" (UID: "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6"). InnerVolumeSpecName "kube-api-access-gjdbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.381949 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/471c5038-c8ee-4819-bb5d-93c509389555-kube-api-access-rqr8k" (OuterVolumeSpecName: "kube-api-access-rqr8k") pod "471c5038-c8ee-4819-bb5d-93c509389555" (UID: "471c5038-c8ee-4819-bb5d-93c509389555"). InnerVolumeSpecName "kube-api-access-rqr8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.405561 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "471c5038-c8ee-4819-bb5d-93c509389555" (UID: "471c5038-c8ee-4819-bb5d-93c509389555"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.447355 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-config" (OuterVolumeSpecName: "config") pod "471c5038-c8ee-4819-bb5d-93c509389555" (UID: "471c5038-c8ee-4819-bb5d-93c509389555"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.457194 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.457235 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqr8k\" (UniqueName: \"kubernetes.io/projected/471c5038-c8ee-4819-bb5d-93c509389555-kube-api-access-rqr8k\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.457248 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/471c5038-c8ee-4819-bb5d-93c509389555-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.457258 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjdbp\" (UniqueName: \"kubernetes.io/projected/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-kube-api-access-gjdbp\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.473828 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" (UID: "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.479646 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" (UID: "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.485056 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-config" (OuterVolumeSpecName: "config") pod "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" (UID: "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.497277 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" (UID: "8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.559422 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.559456 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.559468 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.559484 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:28 crc kubenswrapper[4678]: 
I1124 11:37:28.930070 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gwwg7" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.930537 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gwwg7" event={"ID":"471c5038-c8ee-4819-bb5d-93c509389555","Type":"ContainerDied","Data":"8b4d86b476e5e5ef33a910572f8e6577b63abf96cc0d6cbdc5d339fe5c02d948"} Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.930584 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b4d86b476e5e5ef33a910572f8e6577b63abf96cc0d6cbdc5d339fe5c02d948" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.937326 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2kwbz" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.937791 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2kwbz" event={"ID":"8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6","Type":"ContainerDied","Data":"ad5ee14efd5a9876c0961cc664cb0c32d1d66598ce8c191d90a4beb8572b2e9f"} Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.937830 4678 scope.go:117] "RemoveContainer" containerID="05b090a80272c7a581a95ec04f56e7913a69e197646b82568217918fd9ece808" Nov 24 11:37:28 crc kubenswrapper[4678]: E1124 11:37:28.938840 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-dnf2l" podUID="3fbb2c05-03d0-41ad-b306-0d196383c147" Nov 24 11:37:28 crc kubenswrapper[4678]: I1124 11:37:28.995965 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2kwbz"] Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.005278 4678 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-698758b865-2kwbz"] Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.590166 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cpddm"] Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.607722 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-hzwmf"] Nov 24 11:37:29 crc kubenswrapper[4678]: E1124 11:37:29.608203 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471c5038-c8ee-4819-bb5d-93c509389555" containerName="neutron-db-sync" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.608224 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="471c5038-c8ee-4819-bb5d-93c509389555" containerName="neutron-db-sync" Nov 24 11:37:29 crc kubenswrapper[4678]: E1124 11:37:29.608246 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="dnsmasq-dns" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.608253 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="dnsmasq-dns" Nov 24 11:37:29 crc kubenswrapper[4678]: E1124 11:37:29.608284 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="init" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.608290 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="init" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.608475 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="471c5038-c8ee-4819-bb5d-93c509389555" containerName="neutron-db-sync" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.608496 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="dnsmasq-dns" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 
11:37:29.609688 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.631119 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85857bf94-wpbc7"] Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.644622 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.649620 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.649996 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2zbf7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.650155 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.650919 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.654519 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-hzwmf"] Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.668094 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85857bf94-wpbc7"] Nov 24 11:37:29 crc kubenswrapper[4678]: E1124 11:37:29.704614 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 11:37:29 crc kubenswrapper[4678]: E1124 11:37:29.704791 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z99c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompPro
file:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qx8wj_openstack(7bf1a661-b2a3-458a-b504-2cac3277bd5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:37:29 crc kubenswrapper[4678]: E1124 11:37:29.706956 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qx8wj" podUID="7bf1a661-b2a3-458a-b504-2cac3277bd5d" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.801082 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-config\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.801490 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.801530 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-combined-ca-bundle\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 
11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.801573 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxwc\" (UniqueName: \"kubernetes.io/projected/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-kube-api-access-4zxwc\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.804179 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.804232 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.804291 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-config\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.804339 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-ovndb-tls-certs\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" 
Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.804366 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.804429 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wtds\" (UniqueName: \"kubernetes.io/projected/e132b2d4-c6a9-4283-84aa-11a1214092e6-kube-api-access-2wtds\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.804468 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-httpd-config\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.889742 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2kwbz" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: i/o timeout" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.907010 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wtds\" (UniqueName: \"kubernetes.io/projected/e132b2d4-c6a9-4283-84aa-11a1214092e6-kube-api-access-2wtds\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.907065 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-httpd-config\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.907203 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-config\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.907303 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.907369 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-combined-ca-bundle\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.908178 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxwc\" (UniqueName: \"kubernetes.io/projected/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-kube-api-access-4zxwc\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.908295 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.908326 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.908388 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-config\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.908403 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-config\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.908426 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-ovndb-tls-certs\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.908486 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-svc\") pod 
\"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.908964 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.909957 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.911070 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.913326 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.913420 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.913910 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc 
kubenswrapper[4678]: I1124 11:37:29.916892 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.917120 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6" path="/var/lib/kubelet/pods/8b3d9ba1-5241-4e47-8139-55b1dd4e4bb6/volumes" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.923917 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-httpd-config\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.924111 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-combined-ca-bundle\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.926423 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wtds\" (UniqueName: \"kubernetes.io/projected/e132b2d4-c6a9-4283-84aa-11a1214092e6-kube-api-access-2wtds\") pod \"dnsmasq-dns-84b966f6c9-hzwmf\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.927059 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-ovndb-tls-certs\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.930728 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4zxwc\" (UniqueName: \"kubernetes.io/projected/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-kube-api-access-4zxwc\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: I1124 11:37:29.941253 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-config\") pod \"neutron-85857bf94-wpbc7\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:29 crc kubenswrapper[4678]: E1124 11:37:29.951293 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qx8wj" podUID="7bf1a661-b2a3-458a-b504-2cac3277bd5d" Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.068792 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.082562 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2zbf7" Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.091148 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.203193 4678 scope.go:117] "RemoveContainer" containerID="475073f8bc92d3951aab31de77fb078ec053140578be6a4a92c6582beac1e810" Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.297581 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.297875 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.740487 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cpddm"] Nov 24 11:37:30 crc kubenswrapper[4678]: W1124 11:37:30.766312 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e72e3f7_0533_462d_b9d0_7df8c8de0108.slice/crio-0cf474a96af20dad4eab6c8812af2ee89f3ab7bcc2795ef702845259117af757 WatchSource:0}: Error finding container 0cf474a96af20dad4eab6c8812af2ee89f3ab7bcc2795ef702845259117af757: Status 404 returned error can't find the container with id 0cf474a96af20dad4eab6c8812af2ee89f3ab7bcc2795ef702845259117af757 Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.976641 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"8b2f0329-4af5-4426-a61e-2b3b1deff8a7","Type":"ContainerStarted","Data":"0d824cf68a1b21b01713f94bea3bb0f4899312b99860b62f7e81bcca098dc813"} Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.981184 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26fa8015-2aea-4aaf-baaf-bdcc15096441","Type":"ContainerStarted","Data":"850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f"} Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.982151 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" event={"ID":"9e72e3f7-0533-462d-b9d0-7df8c8de0108","Type":"ContainerStarted","Data":"0cf474a96af20dad4eab6c8812af2ee89f3ab7bcc2795ef702845259117af757"} Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.983132 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bcswl" event={"ID":"82d67de7-2cd2-480b-b8f9-1c73bff16add","Type":"ContainerStarted","Data":"7fed17068414762afc89bebe3b204fd97ca53935dd335d0eb07056a90449e648"} Nov 24 11:37:30 crc kubenswrapper[4678]: I1124 11:37:30.991701 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4qbq8" event={"ID":"4bebde18-e99d-49a3-bb56-5f0de9049363","Type":"ContainerStarted","Data":"84292b8ff95df849599cdd6b81c24ffb6a598d8bd67407b695d8c64170cb7699"} Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.039261 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=55.039234256 podStartE2EDuration="55.039234256s" podCreationTimestamp="2025-11-24 11:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:31.000515811 +0000 UTC m=+1261.931575470" watchObservedRunningTime="2025-11-24 11:37:31.039234256 +0000 UTC m=+1261.970293905" Nov 24 11:37:31 crc kubenswrapper[4678]: 
I1124 11:37:31.057027 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bcswl" podStartSLOduration=4.038332726 podStartE2EDuration="32.057004681s" podCreationTimestamp="2025-11-24 11:36:59 +0000 UTC" firstStartedPulling="2025-11-24 11:37:01.621078353 +0000 UTC m=+1232.552137992" lastFinishedPulling="2025-11-24 11:37:29.639750308 +0000 UTC m=+1260.570809947" observedRunningTime="2025-11-24 11:37:31.026587058 +0000 UTC m=+1261.957646697" watchObservedRunningTime="2025-11-24 11:37:31.057004681 +0000 UTC m=+1261.988064320" Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.104099 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4qbq8" podStartSLOduration=5.468675458 podStartE2EDuration="32.104077039s" podCreationTimestamp="2025-11-24 11:36:59 +0000 UTC" firstStartedPulling="2025-11-24 11:37:01.58132762 +0000 UTC m=+1232.512387259" lastFinishedPulling="2025-11-24 11:37:28.216729201 +0000 UTC m=+1259.147788840" observedRunningTime="2025-11-24 11:37:31.045065232 +0000 UTC m=+1261.976124871" watchObservedRunningTime="2025-11-24 11:37:31.104077039 +0000 UTC m=+1262.035136678" Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.104403 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.293786 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x5lx5"] Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.332976 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.385917 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-hzwmf"] Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.486290 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.524701 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85857bf94-wpbc7"] Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.933777 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-697d9cc569-8n57v"] Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.935641 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.940130 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.940425 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.958302 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 11:37:31 crc kubenswrapper[4678]: I1124 11:37:31.999597 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-697d9cc569-8n57v"] Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.019285 4678 generic.go:334] "Generic (PLEG): container finished" podID="9e72e3f7-0533-462d-b9d0-7df8c8de0108" containerID="948dea52618ae58a06d83eda591777f9c24308902bf8a669040c20f6ba7e455f" exitCode=0 Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.019535 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" event={"ID":"9e72e3f7-0533-462d-b9d0-7df8c8de0108","Type":"ContainerDied","Data":"948dea52618ae58a06d83eda591777f9c24308902bf8a669040c20f6ba7e455f"} Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.030634 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85857bf94-wpbc7" 
event={"ID":"b249aa27-98b1-40ce-85ab-5b7d0a8edf15","Type":"ContainerStarted","Data":"dbee23641c5101139417af74bdd9e03ee19dd032b70cb424fa5b4bbcfd02a0d6"} Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.040911 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0","Type":"ContainerStarted","Data":"9c707a078e1867b23889451e734c588f1f5b2e0f6ec741cce7b1bc9e2c7359ee"} Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.049229 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c118536e-63f0-4b11-8c2c-8edfdb3700d3","Type":"ContainerStarted","Data":"905cef1e8753f13dce74c19c1412c0fc314c4e529ed0d237987d8454116c6b80"} Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.052278 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x5lx5" event={"ID":"195eda15-ecc1-4041-b42e-ffe751e686af","Type":"ContainerStarted","Data":"4e5be1505b4a5b88729d6bfb00dd94637f7c810926dd573175a7c973ca097102"} Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.076897 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" event={"ID":"e132b2d4-c6a9-4283-84aa-11a1214092e6","Type":"ContainerStarted","Data":"c13764ffd4660039cb87493f1a45e93375f9777db615259c648a70c5bf48e6b7"} Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.088144 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-config\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.088235 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-ovndb-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.088276 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m78kg\" (UniqueName: \"kubernetes.io/projected/76238d6c-0c33-441f-8da3-1b4d23b519d8-kube-api-access-m78kg\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.088384 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-public-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.088410 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-internal-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.088443 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-combined-ca-bundle\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.088478 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-httpd-config\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.190130 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-public-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.190168 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-internal-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.190194 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-combined-ca-bundle\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.190252 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-httpd-config\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.190290 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-config\") pod 
\"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.190407 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-ovndb-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.190433 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m78kg\" (UniqueName: \"kubernetes.io/projected/76238d6c-0c33-441f-8da3-1b4d23b519d8-kube-api-access-m78kg\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.201655 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-httpd-config\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.202495 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-combined-ca-bundle\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.204456 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-public-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " 
pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.206631 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-config\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.214956 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m78kg\" (UniqueName: \"kubernetes.io/projected/76238d6c-0c33-441f-8da3-1b4d23b519d8-kube-api-access-m78kg\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.215042 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-ovndb-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.226392 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76238d6c-0c33-441f-8da3-1b4d23b519d8-internal-tls-certs\") pod \"neutron-697d9cc569-8n57v\" (UID: \"76238d6c-0c33-441f-8da3-1b4d23b519d8\") " pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.303197 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.671962 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.817784 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-config\") pod \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.817860 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-swift-storage-0\") pod \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.817931 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-sb\") pod \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.818051 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csgkj\" (UniqueName: \"kubernetes.io/projected/9e72e3f7-0533-462d-b9d0-7df8c8de0108-kube-api-access-csgkj\") pod \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.818116 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-nb\") pod \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.818134 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-svc\") pod \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\" (UID: \"9e72e3f7-0533-462d-b9d0-7df8c8de0108\") " Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.828150 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e72e3f7-0533-462d-b9d0-7df8c8de0108-kube-api-access-csgkj" (OuterVolumeSpecName: "kube-api-access-csgkj") pod "9e72e3f7-0533-462d-b9d0-7df8c8de0108" (UID: "9e72e3f7-0533-462d-b9d0-7df8c8de0108"). InnerVolumeSpecName "kube-api-access-csgkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.854804 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e72e3f7-0533-462d-b9d0-7df8c8de0108" (UID: "9e72e3f7-0533-462d-b9d0-7df8c8de0108"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.860632 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e72e3f7-0533-462d-b9d0-7df8c8de0108" (UID: "9e72e3f7-0533-462d-b9d0-7df8c8de0108"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.866064 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e72e3f7-0533-462d-b9d0-7df8c8de0108" (UID: "9e72e3f7-0533-462d-b9d0-7df8c8de0108"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.873642 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-config" (OuterVolumeSpecName: "config") pod "9e72e3f7-0533-462d-b9d0-7df8c8de0108" (UID: "9e72e3f7-0533-462d-b9d0-7df8c8de0108"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.874905 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e72e3f7-0533-462d-b9d0-7df8c8de0108" (UID: "9e72e3f7-0533-462d-b9d0-7df8c8de0108"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.923129 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.923161 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.923173 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.923183 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csgkj\" (UniqueName: \"kubernetes.io/projected/9e72e3f7-0533-462d-b9d0-7df8c8de0108-kube-api-access-csgkj\") on node \"crc\" 
DevicePath \"\"" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.923192 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:32 crc kubenswrapper[4678]: I1124 11:37:32.923201 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e72e3f7-0533-462d-b9d0-7df8c8de0108-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.079862 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-697d9cc569-8n57v"] Nov 24 11:37:33 crc kubenswrapper[4678]: W1124 11:37:33.082506 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76238d6c_0c33_441f_8da3_1b4d23b519d8.slice/crio-1648508373fe833132e4d264bbdccdf2813ac91565ba41aab46b3a9452fdcc73 WatchSource:0}: Error finding container 1648508373fe833132e4d264bbdccdf2813ac91565ba41aab46b3a9452fdcc73: Status 404 returned error can't find the container with id 1648508373fe833132e4d264bbdccdf2813ac91565ba41aab46b3a9452fdcc73 Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.127911 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x5lx5" event={"ID":"195eda15-ecc1-4041-b42e-ffe751e686af","Type":"ContainerStarted","Data":"43dee8dd2a553aeca802b33092d914631dc3a26f4437d9fd32976b28a51fd95b"} Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.145520 4678 generic.go:334] "Generic (PLEG): container finished" podID="e132b2d4-c6a9-4283-84aa-11a1214092e6" containerID="0f225c2a8aef5b34c6dc016f4c7de590a7612431a21f2a480e9c4908d21e645e" exitCode=0 Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.145576 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" 
event={"ID":"e132b2d4-c6a9-4283-84aa-11a1214092e6","Type":"ContainerDied","Data":"0f225c2a8aef5b34c6dc016f4c7de590a7612431a21f2a480e9c4908d21e645e"} Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.149216 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" event={"ID":"9e72e3f7-0533-462d-b9d0-7df8c8de0108","Type":"ContainerDied","Data":"0cf474a96af20dad4eab6c8812af2ee89f3ab7bcc2795ef702845259117af757"} Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.149260 4678 scope.go:117] "RemoveContainer" containerID="948dea52618ae58a06d83eda591777f9c24308902bf8a669040c20f6ba7e455f" Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.149358 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-cpddm" Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.163306 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x5lx5" podStartSLOduration=12.163288137 podStartE2EDuration="12.163288137s" podCreationTimestamp="2025-11-24 11:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:33.156129175 +0000 UTC m=+1264.087188814" watchObservedRunningTime="2025-11-24 11:37:33.163288137 +0000 UTC m=+1264.094347776" Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.163733 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85857bf94-wpbc7" event={"ID":"b249aa27-98b1-40ce-85ab-5b7d0a8edf15","Type":"ContainerStarted","Data":"c67ff5f4392859d892fba844dbd76aea0671eb358cdb2961e81fad8ab5e1364e"} Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.164692 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.176124 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0","Type":"ContainerStarted","Data":"1877f133cc31f97e1d72ff0e79e782ee259aa4312118c5281383cbab85b489d0"} Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.180005 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c118536e-63f0-4b11-8c2c-8edfdb3700d3","Type":"ContainerStarted","Data":"651603449c2542edac340c9d1856b9f2cab94005b23eda02675cef899f906bb8"} Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.214923 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85857bf94-wpbc7" podStartSLOduration=4.214893316 podStartE2EDuration="4.214893316s" podCreationTimestamp="2025-11-24 11:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:33.212971805 +0000 UTC m=+1264.144031444" watchObservedRunningTime="2025-11-24 11:37:33.214893316 +0000 UTC m=+1264.145952955" Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.586774 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cpddm"] Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.606934 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-cpddm"] Nov 24 11:37:33 crc kubenswrapper[4678]: I1124 11:37:33.912314 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e72e3f7-0533-462d-b9d0-7df8c8de0108" path="/var/lib/kubelet/pods/9e72e3f7-0533-462d-b9d0-7df8c8de0108/volumes" Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.198401 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-697d9cc569-8n57v" event={"ID":"76238d6c-0c33-441f-8da3-1b4d23b519d8","Type":"ContainerStarted","Data":"644f45acff18fa772d913792bf0d193ef7e6a06122f22512e1ae2e3088731fb4"} Nov 24 11:37:34 crc 
kubenswrapper[4678]: I1124 11:37:34.198445 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-697d9cc569-8n57v" event={"ID":"76238d6c-0c33-441f-8da3-1b4d23b519d8","Type":"ContainerStarted","Data":"935a30e73b13b1c57d96ff1c61bc2daf5bb0c73760ad2fee23723dcecd1c87ef"} Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.198455 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-697d9cc569-8n57v" event={"ID":"76238d6c-0c33-441f-8da3-1b4d23b519d8","Type":"ContainerStarted","Data":"1648508373fe833132e4d264bbdccdf2813ac91565ba41aab46b3a9452fdcc73"} Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.199704 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.202713 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" event={"ID":"e132b2d4-c6a9-4283-84aa-11a1214092e6","Type":"ContainerStarted","Data":"282b111a1eda2607770fb9a604e083b3f14169dffab117b8f1e8aa0a3867092b"} Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.203234 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.210689 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/0.log" Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.211832 4678 generic.go:334] "Generic (PLEG): container finished" podID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerID="bbc8696e31e40d32dbecf7ee5e9a98685eefd8175538a1c3b9544bf22e9b5886" exitCode=1 Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.211885 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85857bf94-wpbc7" 
event={"ID":"b249aa27-98b1-40ce-85ab-5b7d0a8edf15","Type":"ContainerDied","Data":"bbc8696e31e40d32dbecf7ee5e9a98685eefd8175538a1c3b9544bf22e9b5886"} Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.212477 4678 scope.go:117] "RemoveContainer" containerID="bbc8696e31e40d32dbecf7ee5e9a98685eefd8175538a1c3b9544bf22e9b5886" Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.217767 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0","Type":"ContainerStarted","Data":"5b020898441443318c7adce8adc71549c7896b4556fc7921288eab96c9bbafea"} Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.217899 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerName="glance-log" containerID="cri-o://1877f133cc31f97e1d72ff0e79e782ee259aa4312118c5281383cbab85b489d0" gracePeriod=30 Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.217986 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerName="glance-httpd" containerID="cri-o://5b020898441443318c7adce8adc71549c7896b4556fc7921288eab96c9bbafea" gracePeriod=30 Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.228856 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerName="glance-log" containerID="cri-o://651603449c2542edac340c9d1856b9f2cab94005b23eda02675cef899f906bb8" gracePeriod=30 Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.228994 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerName="glance-httpd" 
containerID="cri-o://acf59118308637b3a2b5d5b536be4e7e50e3c09051dfbb3dadd499cfb61eb637" gracePeriod=30 Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.236449 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c118536e-63f0-4b11-8c2c-8edfdb3700d3","Type":"ContainerStarted","Data":"acf59118308637b3a2b5d5b536be4e7e50e3c09051dfbb3dadd499cfb61eb637"} Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.255085 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-697d9cc569-8n57v" podStartSLOduration=3.255059862 podStartE2EDuration="3.255059862s" podCreationTimestamp="2025-11-24 11:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:34.223130479 +0000 UTC m=+1265.154190128" watchObservedRunningTime="2025-11-24 11:37:34.255059862 +0000 UTC m=+1265.186119501" Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.268387 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" podStartSLOduration=5.268361778 podStartE2EDuration="5.268361778s" podCreationTimestamp="2025-11-24 11:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:34.24485363 +0000 UTC m=+1265.175913269" watchObservedRunningTime="2025-11-24 11:37:34.268361778 +0000 UTC m=+1265.199421417" Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.279505 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=20.279481895 podStartE2EDuration="20.279481895s" podCreationTimestamp="2025-11-24 11:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
11:37:34.27744375 +0000 UTC m=+1265.208503389" watchObservedRunningTime="2025-11-24 11:37:34.279481895 +0000 UTC m=+1265.210541534" Nov 24 11:37:34 crc kubenswrapper[4678]: I1124 11:37:34.348506 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=20.348491219 podStartE2EDuration="20.348491219s" podCreationTimestamp="2025-11-24 11:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:34.336630142 +0000 UTC m=+1265.267689781" watchObservedRunningTime="2025-11-24 11:37:34.348491219 +0000 UTC m=+1265.279550858" Nov 24 11:37:34 crc kubenswrapper[4678]: E1124 11:37:34.384468 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc118536e_63f0_4b11_8c2c_8edfdb3700d3.slice/crio-651603449c2542edac340c9d1856b9f2cab94005b23eda02675cef899f906bb8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42ac2b38_1a97_4e3d_96ab_cba4f6b5b3f0.slice/crio-1877f133cc31f97e1d72ff0e79e782ee259aa4312118c5281383cbab85b489d0.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.257518 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/1.log" Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.259212 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/0.log" Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.259799 4678 generic.go:334] "Generic (PLEG): container finished" podID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" 
containerID="e93d74ee7292fb227a0be57ff42c7304c1ab24b81f43c42900ffe41aac64025c" exitCode=1 Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.260033 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85857bf94-wpbc7" event={"ID":"b249aa27-98b1-40ce-85ab-5b7d0a8edf15","Type":"ContainerDied","Data":"e93d74ee7292fb227a0be57ff42c7304c1ab24b81f43c42900ffe41aac64025c"} Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.260149 4678 scope.go:117] "RemoveContainer" containerID="bbc8696e31e40d32dbecf7ee5e9a98685eefd8175538a1c3b9544bf22e9b5886" Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.260653 4678 scope.go:117] "RemoveContainer" containerID="e93d74ee7292fb227a0be57ff42c7304c1ab24b81f43c42900ffe41aac64025c" Nov 24 11:37:35 crc kubenswrapper[4678]: E1124 11:37:35.260966 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-85857bf94-wpbc7_openstack(b249aa27-98b1-40ce-85ab-5b7d0a8edf15)\"" pod="openstack/neutron-85857bf94-wpbc7" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.270215 4678 generic.go:334] "Generic (PLEG): container finished" podID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerID="5b020898441443318c7adce8adc71549c7896b4556fc7921288eab96c9bbafea" exitCode=0 Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.270241 4678 generic.go:334] "Generic (PLEG): container finished" podID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerID="1877f133cc31f97e1d72ff0e79e782ee259aa4312118c5281383cbab85b489d0" exitCode=143 Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.270303 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0","Type":"ContainerDied","Data":"5b020898441443318c7adce8adc71549c7896b4556fc7921288eab96c9bbafea"} Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.270382 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0","Type":"ContainerDied","Data":"1877f133cc31f97e1d72ff0e79e782ee259aa4312118c5281383cbab85b489d0"} Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.272338 4678 generic.go:334] "Generic (PLEG): container finished" podID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerID="acf59118308637b3a2b5d5b536be4e7e50e3c09051dfbb3dadd499cfb61eb637" exitCode=0 Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.272353 4678 generic.go:334] "Generic (PLEG): container finished" podID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerID="651603449c2542edac340c9d1856b9f2cab94005b23eda02675cef899f906bb8" exitCode=143 Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.272389 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c118536e-63f0-4b11-8c2c-8edfdb3700d3","Type":"ContainerDied","Data":"acf59118308637b3a2b5d5b536be4e7e50e3c09051dfbb3dadd499cfb61eb637"} Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.272419 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c118536e-63f0-4b11-8c2c-8edfdb3700d3","Type":"ContainerDied","Data":"651603449c2542edac340c9d1856b9f2cab94005b23eda02675cef899f906bb8"} Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.273708 4678 generic.go:334] "Generic (PLEG): container finished" podID="4bebde18-e99d-49a3-bb56-5f0de9049363" containerID="84292b8ff95df849599cdd6b81c24ffb6a598d8bd67407b695d8c64170cb7699" exitCode=0 Nov 24 11:37:35 crc kubenswrapper[4678]: I1124 11:37:35.274642 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-4qbq8" event={"ID":"4bebde18-e99d-49a3-bb56-5f0de9049363","Type":"ContainerDied","Data":"84292b8ff95df849599cdd6b81c24ffb6a598d8bd67407b695d8c64170cb7699"} Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.305616 4678 generic.go:334] "Generic (PLEG): container finished" podID="82d67de7-2cd2-480b-b8f9-1c73bff16add" containerID="7fed17068414762afc89bebe3b204fd97ca53935dd335d0eb07056a90449e648" exitCode=0 Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.305773 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bcswl" event={"ID":"82d67de7-2cd2-480b-b8f9-1c73bff16add","Type":"ContainerDied","Data":"7fed17068414762afc89bebe3b204fd97ca53935dd335d0eb07056a90449e648"} Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.306755 4678 scope.go:117] "RemoveContainer" containerID="e93d74ee7292fb227a0be57ff42c7304c1ab24b81f43c42900ffe41aac64025c" Nov 24 11:37:36 crc kubenswrapper[4678]: E1124 11:37:36.307003 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-85857bf94-wpbc7_openstack(b249aa27-98b1-40ce-85ab-5b7d0a8edf15)\"" pod="openstack/neutron-85857bf94-wpbc7" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.402814 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.439610 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gnxd\" (UniqueName: \"kubernetes.io/projected/c118536e-63f0-4b11-8c2c-8edfdb3700d3-kube-api-access-8gnxd\") pod \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.439972 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-scripts\") pod \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.440081 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-config-data\") pod \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.440098 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-combined-ca-bundle\") pod \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.440129 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-httpd-run\") pod \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.440147 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.440245 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-logs\") pod \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\" (UID: \"c118536e-63f0-4b11-8c2c-8edfdb3700d3\") " Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.445790 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-logs" (OuterVolumeSpecName: "logs") pod "c118536e-63f0-4b11-8c2c-8edfdb3700d3" (UID: "c118536e-63f0-4b11-8c2c-8edfdb3700d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.448907 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c118536e-63f0-4b11-8c2c-8edfdb3700d3-kube-api-access-8gnxd" (OuterVolumeSpecName: "kube-api-access-8gnxd") pod "c118536e-63f0-4b11-8c2c-8edfdb3700d3" (UID: "c118536e-63f0-4b11-8c2c-8edfdb3700d3"). InnerVolumeSpecName "kube-api-access-8gnxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.453275 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c118536e-63f0-4b11-8c2c-8edfdb3700d3" (UID: "c118536e-63f0-4b11-8c2c-8edfdb3700d3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.457336 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "c118536e-63f0-4b11-8c2c-8edfdb3700d3" (UID: "c118536e-63f0-4b11-8c2c-8edfdb3700d3"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.476220 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-scripts" (OuterVolumeSpecName: "scripts") pod "c118536e-63f0-4b11-8c2c-8edfdb3700d3" (UID: "c118536e-63f0-4b11-8c2c-8edfdb3700d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.479490 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c118536e-63f0-4b11-8c2c-8edfdb3700d3" (UID: "c118536e-63f0-4b11-8c2c-8edfdb3700d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.531150 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-config-data" (OuterVolumeSpecName: "config-data") pod "c118536e-63f0-4b11-8c2c-8edfdb3700d3" (UID: "c118536e-63f0-4b11-8c2c-8edfdb3700d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.544035 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.544072 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gnxd\" (UniqueName: \"kubernetes.io/projected/c118536e-63f0-4b11-8c2c-8edfdb3700d3-kube-api-access-8gnxd\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.544084 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.544092 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.544101 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118536e-63f0-4b11-8c2c-8edfdb3700d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.544109 4678 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c118536e-63f0-4b11-8c2c-8edfdb3700d3-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.544143 4678 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.565987 4678 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.646103 4678 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.950701 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 11:37:36 crc kubenswrapper[4678]: I1124 11:37:36.957284 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.317158 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.317344 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c118536e-63f0-4b11-8c2c-8edfdb3700d3","Type":"ContainerDied","Data":"905cef1e8753f13dce74c19c1412c0fc314c4e529ed0d237987d8454116c6b80"} Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.318918 4678 generic.go:334] "Generic (PLEG): container finished" podID="195eda15-ecc1-4041-b42e-ffe751e686af" containerID="43dee8dd2a553aeca802b33092d914631dc3a26f4437d9fd32976b28a51fd95b" exitCode=0 Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.318994 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x5lx5" event={"ID":"195eda15-ecc1-4041-b42e-ffe751e686af","Type":"ContainerDied","Data":"43dee8dd2a553aeca802b33092d914631dc3a26f4437d9fd32976b28a51fd95b"} Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.325000 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 11:37:37 crc 
kubenswrapper[4678]: I1124 11:37:37.398803 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.408255 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.426363 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:37 crc kubenswrapper[4678]: E1124 11:37:37.426925 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerName="glance-httpd" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.426949 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerName="glance-httpd" Nov 24 11:37:37 crc kubenswrapper[4678]: E1124 11:37:37.426984 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e72e3f7-0533-462d-b9d0-7df8c8de0108" containerName="init" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.426995 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e72e3f7-0533-462d-b9d0-7df8c8de0108" containerName="init" Nov 24 11:37:37 crc kubenswrapper[4678]: E1124 11:37:37.427021 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerName="glance-log" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.427029 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerName="glance-log" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.427279 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerName="glance-httpd" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.427322 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e72e3f7-0533-462d-b9d0-7df8c8de0108" 
containerName="init" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.427337 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" containerName="glance-log" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.428894 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.434254 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.434863 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.461808 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.464470 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.464538 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.464614 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.464690 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.464777 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5bmw\" (UniqueName: \"kubernetes.io/projected/765f2f85-0026-4941-94d4-8fb2f913d46d-kube-api-access-d5bmw\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.464822 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.464842 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-logs\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.464907 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.566325 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.566417 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.566466 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.566537 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5bmw\" (UniqueName: \"kubernetes.io/projected/765f2f85-0026-4941-94d4-8fb2f913d46d-kube-api-access-d5bmw\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.566570 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.566589 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-logs\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.566655 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.566730 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.567373 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.567385 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"765f2f85-0026-4941-94d4-8fb2f913d46d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.568384 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-logs\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.580314 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.580955 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.582480 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.584165 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " 
pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.586999 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5bmw\" (UniqueName: \"kubernetes.io/projected/765f2f85-0026-4941-94d4-8fb2f913d46d-kube-api-access-d5bmw\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.617643 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.748949 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:37 crc kubenswrapper[4678]: I1124 11:37:37.909968 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c118536e-63f0-4b11-8c2c-8edfdb3700d3" path="/var/lib/kubelet/pods/c118536e-63f0-4b11-8c2c-8edfdb3700d3/volumes" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.318549 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.321250 4678 scope.go:117] "RemoveContainer" containerID="acf59118308637b3a2b5d5b536be4e7e50e3c09051dfbb3dadd499cfb61eb637" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.335797 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.367701 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.371649 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.371642 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0","Type":"ContainerDied","Data":"9c707a078e1867b23889451e734c588f1f5b2e0f6ec741cce7b1bc9e2c7359ee"} Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.376426 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.378095 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/1.log" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.380223 4678 scope.go:117] "RemoveContainer" containerID="651603449c2542edac340c9d1856b9f2cab94005b23eda02675cef899f906bb8" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.391846 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bcswl" event={"ID":"82d67de7-2cd2-480b-b8f9-1c73bff16add","Type":"ContainerDied","Data":"e4bf7fd9675516f01796d9f35b6cdef968b4fcf52a8a25835f187b5cf8fe69c4"} Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.391883 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bf7fd9675516f01796d9f35b6cdef968b4fcf52a8a25835f187b5cf8fe69c4" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.391954 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bcswl" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.411402 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k87l\" (UniqueName: \"kubernetes.io/projected/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-kube-api-access-8k87l\") pod \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.411507 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-httpd-run\") pod \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.411545 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-config-data\") pod \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.411600 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-logs\") pod \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.411700 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.411759 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-scripts\") pod \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.411780 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-combined-ca-bundle\") pod \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\" (UID: \"42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.413741 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-logs" (OuterVolumeSpecName: "logs") pod "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" (UID: "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.414009 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4qbq8" event={"ID":"4bebde18-e99d-49a3-bb56-5f0de9049363","Type":"ContainerDied","Data":"cf4dc55d46b240af07302443c7959900ac7a2adf58a9d7538d1aa8ebf4b7c6de"} Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.414081 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4dc55d46b240af07302443c7959900ac7a2adf58a9d7538d1aa8ebf4b7c6de" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.414941 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4qbq8" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.416032 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" (UID: "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.417470 4678 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.417507 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.424538 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" (UID: "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.424933 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x5lx5" event={"ID":"195eda15-ecc1-4041-b42e-ffe751e686af","Type":"ContainerDied","Data":"4e5be1505b4a5b88729d6bfb00dd94637f7c810926dd573175a7c973ca097102"} Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.424978 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e5be1505b4a5b88729d6bfb00dd94637f7c810926dd573175a7c973ca097102" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.425051 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x5lx5" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.425901 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-kube-api-access-8k87l" (OuterVolumeSpecName: "kube-api-access-8k87l") pod "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" (UID: "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0"). InnerVolumeSpecName "kube-api-access-8k87l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.427769 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-scripts" (OuterVolumeSpecName: "scripts") pod "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" (UID: "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.443893 4678 scope.go:117] "RemoveContainer" containerID="5b020898441443318c7adce8adc71549c7896b4556fc7921288eab96c9bbafea" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.450868 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" (UID: "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.482325 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-config-data" (OuterVolumeSpecName: "config-data") pod "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" (UID: "42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.518994 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-fernet-keys\") pod \"195eda15-ecc1-4041-b42e-ffe751e686af\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519152 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bebde18-e99d-49a3-bb56-5f0de9049363-logs\") pod \"4bebde18-e99d-49a3-bb56-5f0de9049363\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519358 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-credential-keys\") pod \"195eda15-ecc1-4041-b42e-ffe751e686af\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519467 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xwk4\" (UniqueName: \"kubernetes.io/projected/82d67de7-2cd2-480b-b8f9-1c73bff16add-kube-api-access-4xwk4\") pod \"82d67de7-2cd2-480b-b8f9-1c73bff16add\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519490 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-db-sync-config-data\") pod \"82d67de7-2cd2-480b-b8f9-1c73bff16add\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519527 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-scripts\") pod \"195eda15-ecc1-4041-b42e-ffe751e686af\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519562 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc45b\" (UniqueName: \"kubernetes.io/projected/4bebde18-e99d-49a3-bb56-5f0de9049363-kube-api-access-hc45b\") pod \"4bebde18-e99d-49a3-bb56-5f0de9049363\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519615 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-combined-ca-bundle\") pod \"82d67de7-2cd2-480b-b8f9-1c73bff16add\" (UID: \"82d67de7-2cd2-480b-b8f9-1c73bff16add\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519699 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-config-data\") pod \"195eda15-ecc1-4041-b42e-ffe751e686af\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519781 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-scripts\") pod \"4bebde18-e99d-49a3-bb56-5f0de9049363\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519818 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkxvg\" (UniqueName: \"kubernetes.io/projected/195eda15-ecc1-4041-b42e-ffe751e686af-kube-api-access-tkxvg\") pod \"195eda15-ecc1-4041-b42e-ffe751e686af\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 
11:37:39.519855 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-config-data\") pod \"4bebde18-e99d-49a3-bb56-5f0de9049363\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519910 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-combined-ca-bundle\") pod \"4bebde18-e99d-49a3-bb56-5f0de9049363\" (UID: \"4bebde18-e99d-49a3-bb56-5f0de9049363\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.519933 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-combined-ca-bundle\") pod \"195eda15-ecc1-4041-b42e-ffe751e686af\" (UID: \"195eda15-ecc1-4041-b42e-ffe751e686af\") " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.520606 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k87l\" (UniqueName: \"kubernetes.io/projected/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-kube-api-access-8k87l\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.520621 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.520654 4678 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.520677 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.520689 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.524822 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "195eda15-ecc1-4041-b42e-ffe751e686af" (UID: "195eda15-ecc1-4041-b42e-ffe751e686af"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.526061 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "82d67de7-2cd2-480b-b8f9-1c73bff16add" (UID: "82d67de7-2cd2-480b-b8f9-1c73bff16add"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.526268 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "195eda15-ecc1-4041-b42e-ffe751e686af" (UID: "195eda15-ecc1-4041-b42e-ffe751e686af"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.526754 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bebde18-e99d-49a3-bb56-5f0de9049363-logs" (OuterVolumeSpecName: "logs") pod "4bebde18-e99d-49a3-bb56-5f0de9049363" (UID: "4bebde18-e99d-49a3-bb56-5f0de9049363"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.527488 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d67de7-2cd2-480b-b8f9-1c73bff16add-kube-api-access-4xwk4" (OuterVolumeSpecName: "kube-api-access-4xwk4") pod "82d67de7-2cd2-480b-b8f9-1c73bff16add" (UID: "82d67de7-2cd2-480b-b8f9-1c73bff16add"). InnerVolumeSpecName "kube-api-access-4xwk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.531400 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195eda15-ecc1-4041-b42e-ffe751e686af-kube-api-access-tkxvg" (OuterVolumeSpecName: "kube-api-access-tkxvg") pod "195eda15-ecc1-4041-b42e-ffe751e686af" (UID: "195eda15-ecc1-4041-b42e-ffe751e686af"). InnerVolumeSpecName "kube-api-access-tkxvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.542362 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bebde18-e99d-49a3-bb56-5f0de9049363-kube-api-access-hc45b" (OuterVolumeSpecName: "kube-api-access-hc45b") pod "4bebde18-e99d-49a3-bb56-5f0de9049363" (UID: "4bebde18-e99d-49a3-bb56-5f0de9049363"). InnerVolumeSpecName "kube-api-access-hc45b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.550909 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-scripts" (OuterVolumeSpecName: "scripts") pod "4bebde18-e99d-49a3-bb56-5f0de9049363" (UID: "4bebde18-e99d-49a3-bb56-5f0de9049363"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.553412 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-scripts" (OuterVolumeSpecName: "scripts") pod "195eda15-ecc1-4041-b42e-ffe751e686af" (UID: "195eda15-ecc1-4041-b42e-ffe751e686af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.573082 4678 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.579200 4678 scope.go:117] "RemoveContainer" containerID="1877f133cc31f97e1d72ff0e79e782ee259aa4312118c5281383cbab85b489d0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.589570 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "195eda15-ecc1-4041-b42e-ffe751e686af" (UID: "195eda15-ecc1-4041-b42e-ffe751e686af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.590984 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-config-data" (OuterVolumeSpecName: "config-data") pod "4bebde18-e99d-49a3-bb56-5f0de9049363" (UID: "4bebde18-e99d-49a3-bb56-5f0de9049363"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.592175 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82d67de7-2cd2-480b-b8f9-1c73bff16add" (UID: "82d67de7-2cd2-480b-b8f9-1c73bff16add"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.609325 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-config-data" (OuterVolumeSpecName: "config-data") pod "195eda15-ecc1-4041-b42e-ffe751e686af" (UID: "195eda15-ecc1-4041-b42e-ffe751e686af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.618903 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bebde18-e99d-49a3-bb56-5f0de9049363" (UID: "4bebde18-e99d-49a3-bb56-5f0de9049363"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622622 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622658 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc45b\" (UniqueName: \"kubernetes.io/projected/4bebde18-e99d-49a3-bb56-5f0de9049363-kube-api-access-hc45b\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622688 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622700 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622710 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622721 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkxvg\" (UniqueName: \"kubernetes.io/projected/195eda15-ecc1-4041-b42e-ffe751e686af-kube-api-access-tkxvg\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622730 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622738 4678 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bebde18-e99d-49a3-bb56-5f0de9049363-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622746 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622754 4678 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622763 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bebde18-e99d-49a3-bb56-5f0de9049363-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622773 4678 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622783 4678 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/195eda15-ecc1-4041-b42e-ffe751e686af-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622794 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xwk4\" (UniqueName: \"kubernetes.io/projected/82d67de7-2cd2-480b-b8f9-1c73bff16add-kube-api-access-4xwk4\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.622805 4678 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/82d67de7-2cd2-480b-b8f9-1c73bff16add-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.778713 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.800909 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.823790 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:39 crc kubenswrapper[4678]: E1124 11:37:39.824297 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerName="glance-httpd" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824315 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerName="glance-httpd" Nov 24 11:37:39 crc kubenswrapper[4678]: E1124 11:37:39.824339 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bebde18-e99d-49a3-bb56-5f0de9049363" containerName="placement-db-sync" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824347 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bebde18-e99d-49a3-bb56-5f0de9049363" containerName="placement-db-sync" Nov 24 11:37:39 crc kubenswrapper[4678]: E1124 11:37:39.824358 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerName="glance-log" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824364 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerName="glance-log" Nov 24 11:37:39 crc kubenswrapper[4678]: E1124 11:37:39.824386 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195eda15-ecc1-4041-b42e-ffe751e686af" containerName="keystone-bootstrap" Nov 
24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824392 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="195eda15-ecc1-4041-b42e-ffe751e686af" containerName="keystone-bootstrap" Nov 24 11:37:39 crc kubenswrapper[4678]: E1124 11:37:39.824409 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d67de7-2cd2-480b-b8f9-1c73bff16add" containerName="barbican-db-sync" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824415 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d67de7-2cd2-480b-b8f9-1c73bff16add" containerName="barbican-db-sync" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824612 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d67de7-2cd2-480b-b8f9-1c73bff16add" containerName="barbican-db-sync" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824632 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bebde18-e99d-49a3-bb56-5f0de9049363" containerName="placement-db-sync" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824646 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="195eda15-ecc1-4041-b42e-ffe751e686af" containerName="keystone-bootstrap" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824657 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerName="glance-log" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.824663 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" containerName="glance-httpd" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.825752 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.844586 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.846539 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.847325 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.877716 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:37:39 crc kubenswrapper[4678]: W1124 11:37:39.879784 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod765f2f85_0026_4941_94d4_8fb2f913d46d.slice/crio-6e32eb25e1274aee30c86bde3902a5193bd85b7c6914bfe4d989da4d10c050d4 WatchSource:0}: Error finding container 6e32eb25e1274aee30c86bde3902a5193bd85b7c6914bfe4d989da4d10c050d4: Status 404 returned error can't find the container with id 6e32eb25e1274aee30c86bde3902a5193bd85b7c6914bfe4d989da4d10c050d4 Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.907624 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0" path="/var/lib/kubelet/pods/42ac2b38-1a97-4e3d-96ab-cba4f6b5b3f0/volumes" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.928314 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-logs\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.928374 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6tj\" (UniqueName: 
\"kubernetes.io/projected/7f345f7d-85e6-4995-9706-3189c846de37-kube-api-access-mt6tj\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.928428 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.928538 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-scripts\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.928570 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.928621 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-config-data\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.928702 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:39 crc kubenswrapper[4678]: I1124 11:37:39.928740 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.030332 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.030452 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-config-data\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.030507 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.030703 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.031224 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.031296 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-logs\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.031340 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt6tj\" (UniqueName: \"kubernetes.io/projected/7f345f7d-85e6-4995-9706-3189c846de37-kube-api-access-mt6tj\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.031432 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.031709 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-scripts\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.032372 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.032812 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-logs\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.035010 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-scripts\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.036490 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.037289 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.047491 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-config-data\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.051840 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt6tj\" (UniqueName: \"kubernetes.io/projected/7f345f7d-85e6-4995-9706-3189c846de37-kube-api-access-mt6tj\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.067136 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.070945 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.158191 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-2wzjt"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.158446 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" podUID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerName="dnsmasq-dns" containerID="cri-o://3a47c31434a8727ba90b97cefe8e96a410dee7ea9b5df00c1be488ebc00c5df5" gracePeriod=10 Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.168101 4678 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.472756 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" podUID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.181:5353: connect: connection refused" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.531433 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26fa8015-2aea-4aaf-baaf-bdcc15096441","Type":"ContainerStarted","Data":"5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab"} Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.535794 4678 generic.go:334] "Generic (PLEG): container finished" podID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerID="3a47c31434a8727ba90b97cefe8e96a410dee7ea9b5df00c1be488ebc00c5df5" exitCode=0 Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.535871 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" event={"ID":"39a2ab81-7e34-43bf-94ad-47a0452dbbfa","Type":"ContainerDied","Data":"3a47c31434a8727ba90b97cefe8e96a410dee7ea9b5df00c1be488ebc00c5df5"} Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.537439 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"765f2f85-0026-4941-94d4-8fb2f913d46d","Type":"ContainerStarted","Data":"6e32eb25e1274aee30c86bde3902a5193bd85b7c6914bfe4d989da4d10c050d4"} Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.569965 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7cb75676bc-dmjv6"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.571437 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.575466 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.575637 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.576125 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.576215 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.576300 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cvvbb" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.576602 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.605425 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f4c4bbb96-gnmrh"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.610592 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.617729 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.619443 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.620123 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.620227 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.620328 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mw7nj" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.642764 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f4c4bbb96-gnmrh"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.670893 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cb75676bc-dmjv6"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.686792 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-public-tls-certs\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.686844 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-credential-keys\") pod \"keystone-7cb75676bc-dmjv6\" (UID: 
\"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.686873 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-scripts\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.686927 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-internal-tls-certs\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.686946 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-config-data\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.687031 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-combined-ca-bundle\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.687110 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-fernet-keys\") pod \"keystone-7cb75676bc-dmjv6\" (UID: 
\"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.687167 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrr9\" (UniqueName: \"kubernetes.io/projected/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-kube-api-access-kcrr9\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.762775 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6fcdf46c94-52rq9"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.764766 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.771705 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.772065 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.772243 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-49hht" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.787760 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6fcdf46c94-52rq9"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.790777 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-fernet-keys\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: 
I1124 11:37:40.790835 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ccf264-50f3-476e-9640-1a4f3d23044f-logs\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.790869 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7l5n\" (UniqueName: \"kubernetes.io/projected/18ccf264-50f3-476e-9640-1a4f3d23044f-kube-api-access-m7l5n\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.790900 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrr9\" (UniqueName: \"kubernetes.io/projected/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-kube-api-access-kcrr9\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.790940 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-internal-tls-certs\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.791020 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-scripts\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: 
I1124 11:37:40.791048 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-public-tls-certs\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.791076 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-credential-keys\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.791131 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-scripts\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.791177 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-internal-tls-certs\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.791656 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-config-data\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.801797 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-config-data\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.801949 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-combined-ca-bundle\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.802001 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-combined-ca-bundle\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.802076 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-public-tls-certs\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.810576 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-fernet-keys\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.815783 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-public-tls-certs\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.828333 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-internal-tls-certs\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.830035 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-scripts\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.832281 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-config-data\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.833447 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-credential-keys\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.833628 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-combined-ca-bundle\") pod \"keystone-7cb75676bc-dmjv6\" (UID: 
\"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.872688 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrr9\" (UniqueName: \"kubernetes.io/projected/eed6b8b9-3443-42af-ab2e-b8695cf8b1e8-kube-api-access-kcrr9\") pod \"keystone-7cb75676bc-dmjv6\" (UID: \"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8\") " pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.873308 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-586bfddf5f-xk2jd"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.875305 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.882301 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904187 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ccf264-50f3-476e-9640-1a4f3d23044f-logs\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904234 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7l5n\" (UniqueName: \"kubernetes.io/projected/18ccf264-50f3-476e-9640-1a4f3d23044f-kube-api-access-m7l5n\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904287 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-internal-tls-certs\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904376 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-scripts\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904435 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-combined-ca-bundle\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904477 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-config-data\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904503 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh76k\" (UniqueName: \"kubernetes.io/projected/44457729-ea53-4b02-bb60-00cd81170d9b-kube-api-access-jh76k\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904545 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44457729-ea53-4b02-bb60-00cd81170d9b-logs\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904570 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-config-data\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904600 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-config-data-custom\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904658 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-combined-ca-bundle\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.904717 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-public-tls-certs\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.909233 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-scripts\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.915849 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-586bfddf5f-xk2jd"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.918825 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ccf264-50f3-476e-9640-1a4f3d23044f-logs\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.923623 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-public-tls-certs\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.927258 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-combined-ca-bundle\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.930621 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mrl65"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.932211 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-internal-tls-certs\") pod 
\"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.933335 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ccf264-50f3-476e-9640-1a4f3d23044f-config-data\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.933566 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.968907 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mrl65"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.970174 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7l5n\" (UniqueName: \"kubernetes.io/projected/18ccf264-50f3-476e-9640-1a4f3d23044f-kube-api-access-m7l5n\") pod \"placement-7f4c4bbb96-gnmrh\" (UID: \"18ccf264-50f3-476e-9640-1a4f3d23044f\") " pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.982648 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-fd86b57f4-94kch"] Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.985314 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.989108 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:40 crc kubenswrapper[4678]: I1124 11:37:40.992266 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.000154 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.005187 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fd86b57f4-94kch"] Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.010201 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-combined-ca-bundle\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.011846 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-config-data\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.011983 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh76k\" (UniqueName: \"kubernetes.io/projected/44457729-ea53-4b02-bb60-00cd81170d9b-kube-api-access-jh76k\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.013686 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44457729-ea53-4b02-bb60-00cd81170d9b-logs\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: 
\"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.014826 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-config-data-custom\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.015034 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqqkn\" (UniqueName: \"kubernetes.io/projected/ea290c11-6cf3-425a-a5be-749d3563adaa-kube-api-access-tqqkn\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.015477 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-config-data\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.021076 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-config-data-custom\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.021768 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-combined-ca-bundle\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.020847 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-combined-ca-bundle\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.015753 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44457729-ea53-4b02-bb60-00cd81170d9b-logs\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.021735 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-config-data-custom\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.019803 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44457729-ea53-4b02-bb60-00cd81170d9b-config-data\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.023185 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea290c11-6cf3-425a-a5be-749d3563adaa-logs\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.047757 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh76k\" (UniqueName: \"kubernetes.io/projected/44457729-ea53-4b02-bb60-00cd81170d9b-kube-api-access-jh76k\") pod \"barbican-keystone-listener-6fcdf46c94-52rq9\" (UID: \"44457729-ea53-4b02-bb60-00cd81170d9b\") " pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.096219 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126229 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/030f716d-d22a-4024-972e-4c3261a22325-logs\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126307 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126329 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: 
\"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126370 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-combined-ca-bundle\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126406 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqqkn\" (UniqueName: \"kubernetes.io/projected/ea290c11-6cf3-425a-a5be-749d3563adaa-kube-api-access-tqqkn\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126451 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-config-data\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126472 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126507 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwwtx\" (UniqueName: 
\"kubernetes.io/projected/030f716d-d22a-4024-972e-4c3261a22325-kube-api-access-cwwtx\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126534 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126555 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-config\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126597 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-config-data-custom\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126621 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-combined-ca-bundle\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126655 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-872m4\" (UniqueName: \"kubernetes.io/projected/7ab25308-baab-4b92-8bbb-7525b0e96550-kube-api-access-872m4\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126707 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126737 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea290c11-6cf3-425a-a5be-749d3563adaa-logs\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.126781 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data-custom\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.147958 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea290c11-6cf3-425a-a5be-749d3563adaa-logs\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.150583 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-combined-ca-bundle\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.158235 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-config-data\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.160966 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea290c11-6cf3-425a-a5be-749d3563adaa-config-data-custom\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.166312 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqqkn\" (UniqueName: \"kubernetes.io/projected/ea290c11-6cf3-425a-a5be-749d3563adaa-kube-api-access-tqqkn\") pod \"barbican-worker-586bfddf5f-xk2jd\" (UID: \"ea290c11-6cf3-425a-a5be-749d3563adaa\") " pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.227626 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-nb\") pod \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.227950 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-sb\") pod \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.228004 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-swift-storage-0\") pod \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.228127 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzn5m\" (UniqueName: \"kubernetes.io/projected/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-kube-api-access-vzn5m\") pod \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.228268 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-svc\") pod \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.228296 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-config\") pod \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\" (UID: \"39a2ab81-7e34-43bf-94ad-47a0452dbbfa\") " Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.228736 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 
11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.228855 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwwtx\" (UniqueName: \"kubernetes.io/projected/030f716d-d22a-4024-972e-4c3261a22325-kube-api-access-cwwtx\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.228895 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.228947 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-config\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.229001 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872m4\" (UniqueName: \"kubernetes.io/projected/7ab25308-baab-4b92-8bbb-7525b0e96550-kube-api-access-872m4\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.229036 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 
11:37:41.229081 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data-custom\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.229145 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/030f716d-d22a-4024-972e-4c3261a22325-logs\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.229209 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.229240 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.229285 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-combined-ca-bundle\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.230538 4678 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/030f716d-d22a-4024-972e-4c3261a22325-logs\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.230904 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.230989 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.232950 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-config\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.233000 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.236885 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-nb\") pod 
\"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.239508 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.251206 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-kube-api-access-vzn5m" (OuterVolumeSpecName: "kube-api-access-vzn5m") pod "39a2ab81-7e34-43bf-94ad-47a0452dbbfa" (UID: "39a2ab81-7e34-43bf-94ad-47a0452dbbfa"). InnerVolumeSpecName "kube-api-access-vzn5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.258861 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwwtx\" (UniqueName: \"kubernetes.io/projected/030f716d-d22a-4024-972e-4c3261a22325-kube-api-access-cwwtx\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.260559 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-586bfddf5f-xk2jd" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.264417 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data-custom\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.264735 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.265008 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.273473 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-combined-ca-bundle\") pod \"barbican-api-fd86b57f4-94kch\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") " pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.285791 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872m4\" (UniqueName: \"kubernetes.io/projected/7ab25308-baab-4b92-8bbb-7525b0e96550-kube-api-access-872m4\") pod \"dnsmasq-dns-75c8ddd69c-mrl65\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.320573 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.331025 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzn5m\" (UniqueName: \"kubernetes.io/projected/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-kube-api-access-vzn5m\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.346971 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-config" (OuterVolumeSpecName: "config") pod "39a2ab81-7e34-43bf-94ad-47a0452dbbfa" (UID: "39a2ab81-7e34-43bf-94ad-47a0452dbbfa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.359459 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "39a2ab81-7e34-43bf-94ad-47a0452dbbfa" (UID: "39a2ab81-7e34-43bf-94ad-47a0452dbbfa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.420332 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39a2ab81-7e34-43bf-94ad-47a0452dbbfa" (UID: "39a2ab81-7e34-43bf-94ad-47a0452dbbfa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.438331 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.438382 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.438395 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.451230 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39a2ab81-7e34-43bf-94ad-47a0452dbbfa" (UID: "39a2ab81-7e34-43bf-94ad-47a0452dbbfa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.463224 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39a2ab81-7e34-43bf-94ad-47a0452dbbfa" (UID: "39a2ab81-7e34-43bf-94ad-47a0452dbbfa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.544702 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.544731 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39a2ab81-7e34-43bf-94ad-47a0452dbbfa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.571137 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.587321 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dnf2l" event={"ID":"3fbb2c05-03d0-41ad-b306-0d196383c147","Type":"ContainerStarted","Data":"50b408996eabd8bc0e5b0d4f53e3cb30296cb8743c1b755d2a615a76ed7f92a7"} Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.597515 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" event={"ID":"39a2ab81-7e34-43bf-94ad-47a0452dbbfa","Type":"ContainerDied","Data":"f5246dcff3ac200ee5e8177440c8452fc54ecea0b63a74d24e5331a8299788a7"} Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.597581 4678 scope.go:117] "RemoveContainer" containerID="3a47c31434a8727ba90b97cefe8e96a410dee7ea9b5df00c1be488ebc00c5df5" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.597776 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-2wzjt" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.605878 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"765f2f85-0026-4941-94d4-8fb2f913d46d","Type":"ContainerStarted","Data":"b6dfef16739a1c0717ae6be60c05ad9d28b7f218dfeb9c89f59a25e32dbf0a56"} Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.627797 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f345f7d-85e6-4995-9706-3189c846de37","Type":"ContainerStarted","Data":"d1592d3c0704e915aa99caaa918a065fe376602b9337676499341f453395cf0f"} Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.628720 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-dnf2l" podStartSLOduration=3.032176023 podStartE2EDuration="42.628702928s" podCreationTimestamp="2025-11-24 11:36:59 +0000 UTC" firstStartedPulling="2025-11-24 11:37:01.161820055 +0000 UTC m=+1232.092879694" lastFinishedPulling="2025-11-24 11:37:40.75834696 +0000 UTC m=+1271.689406599" observedRunningTime="2025-11-24 11:37:41.619711438 +0000 UTC m=+1272.550771077" watchObservedRunningTime="2025-11-24 11:37:41.628702928 +0000 UTC m=+1272.559762567" Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.670779 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-2wzjt"] Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.691639 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-2wzjt"] Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.724834 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cb75676bc-dmjv6"] Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.779793 4678 scope.go:117] "RemoveContainer" containerID="385f1401bdc7e40c51b780ae79a32ccb42d4f08183de60cb0656300539dce972" Nov 24 
11:37:41 crc kubenswrapper[4678]: W1124 11:37:41.792078 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeed6b8b9_3443_42af_ab2e_b8695cf8b1e8.slice/crio-6b592e835878130865f3bb295fcee4e105024ebfb97577c2e8748e95d8e3e434 WatchSource:0}: Error finding container 6b592e835878130865f3bb295fcee4e105024ebfb97577c2e8748e95d8e3e434: Status 404 returned error can't find the container with id 6b592e835878130865f3bb295fcee4e105024ebfb97577c2e8748e95d8e3e434 Nov 24 11:37:41 crc kubenswrapper[4678]: I1124 11:37:41.988306 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" path="/var/lib/kubelet/pods/39a2ab81-7e34-43bf-94ad-47a0452dbbfa/volumes" Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:41.998062 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f4c4bbb96-gnmrh"] Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.259966 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6fcdf46c94-52rq9"] Nov 24 11:37:42 crc kubenswrapper[4678]: W1124 11:37:42.334951 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44457729_ea53_4b02_bb60_00cd81170d9b.slice/crio-2acaba43e2625cb637e51c362f1939f870e195cd6ae75a07665786d4dab66050 WatchSource:0}: Error finding container 2acaba43e2625cb637e51c362f1939f870e195cd6ae75a07665786d4dab66050: Status 404 returned error can't find the container with id 2acaba43e2625cb637e51c362f1939f870e195cd6ae75a07665786d4dab66050 Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.369749 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-586bfddf5f-xk2jd"] Nov 24 11:37:42 crc kubenswrapper[4678]: W1124 11:37:42.404276 4678 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea290c11_6cf3_425a_a5be_749d3563adaa.slice/crio-e14cb35c23203c085812c3cf99688c32dcc01c1a35949a52529daca484181c51 WatchSource:0}: Error finding container e14cb35c23203c085812c3cf99688c32dcc01c1a35949a52529daca484181c51: Status 404 returned error can't find the container with id e14cb35c23203c085812c3cf99688c32dcc01c1a35949a52529daca484181c51 Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.516776 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mrl65"] Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.555122 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fd86b57f4-94kch"] Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.680666 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4c4bbb96-gnmrh" event={"ID":"18ccf264-50f3-476e-9640-1a4f3d23044f","Type":"ContainerStarted","Data":"afc7ee09c4a96059db76d094c972d51895be69d1b58882b7d7942ff1f0f0b418"} Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.680783 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4c4bbb96-gnmrh" event={"ID":"18ccf264-50f3-476e-9640-1a4f3d23044f","Type":"ContainerStarted","Data":"3ddea0bb3b7981715b22980d670e58d7fb6dd0256c5c8f56d238fced51c1b3a8"} Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.684861 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd86b57f4-94kch" event={"ID":"030f716d-d22a-4024-972e-4c3261a22325","Type":"ContainerStarted","Data":"ff21ffd9a15748bcbc4abf68b3ab7e8965897935ec7a7de55ba59417f5c5470b"} Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.703385 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"765f2f85-0026-4941-94d4-8fb2f913d46d","Type":"ContainerStarted","Data":"bbbb678a73d3318e72aa080a75cb86ab2adb15dde463ab361994ee932d813da7"} 
Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.706676 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cb75676bc-dmjv6" event={"ID":"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8","Type":"ContainerStarted","Data":"5ef97ec0150003d6b4411c919e8a4d5110736cdf640c7cd37f776f076d0284f2"} Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.706721 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cb75676bc-dmjv6" event={"ID":"eed6b8b9-3443-42af-ab2e-b8695cf8b1e8","Type":"ContainerStarted","Data":"6b592e835878130865f3bb295fcee4e105024ebfb97577c2e8748e95d8e3e434"} Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.706938 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.717973 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" event={"ID":"7ab25308-baab-4b92-8bbb-7525b0e96550","Type":"ContainerStarted","Data":"b0e16f1eac9b87fe182e25da0289953778051dc913038ccdccfccb6ac3f01d45"} Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.727301 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" event={"ID":"44457729-ea53-4b02-bb60-00cd81170d9b","Type":"ContainerStarted","Data":"2acaba43e2625cb637e51c362f1939f870e195cd6ae75a07665786d4dab66050"} Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.728658 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-586bfddf5f-xk2jd" event={"ID":"ea290c11-6cf3-425a-a5be-749d3563adaa","Type":"ContainerStarted","Data":"e14cb35c23203c085812c3cf99688c32dcc01c1a35949a52529daca484181c51"} Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.769067 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.769050732 
podStartE2EDuration="5.769050732s" podCreationTimestamp="2025-11-24 11:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:42.743092448 +0000 UTC m=+1273.674152097" watchObservedRunningTime="2025-11-24 11:37:42.769050732 +0000 UTC m=+1273.700110361" Nov 24 11:37:42 crc kubenswrapper[4678]: I1124 11:37:42.774066 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7cb75676bc-dmjv6" podStartSLOduration=2.774049686 podStartE2EDuration="2.774049686s" podCreationTimestamp="2025-11-24 11:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:42.768494017 +0000 UTC m=+1273.699553726" watchObservedRunningTime="2025-11-24 11:37:42.774049686 +0000 UTC m=+1273.705109325" Nov 24 11:37:43 crc kubenswrapper[4678]: I1124 11:37:43.796242 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4c4bbb96-gnmrh" event={"ID":"18ccf264-50f3-476e-9640-1a4f3d23044f","Type":"ContainerStarted","Data":"05fe28c914875fc98399afe0fe1a478a9c2daf68de226f1223374a20baa0b355"} Nov 24 11:37:43 crc kubenswrapper[4678]: I1124 11:37:43.797379 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:43 crc kubenswrapper[4678]: I1124 11:37:43.807013 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd86b57f4-94kch" event={"ID":"030f716d-d22a-4024-972e-4c3261a22325","Type":"ContainerStarted","Data":"2295dfbf92071bbefdec8f6dd079bf549e47ca032a35bbd3169b546f0dd95f2b"} Nov 24 11:37:43 crc kubenswrapper[4678]: I1124 11:37:43.810751 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"7f345f7d-85e6-4995-9706-3189c846de37","Type":"ContainerStarted","Data":"2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb"} Nov 24 11:37:43 crc kubenswrapper[4678]: I1124 11:37:43.812722 4678 generic.go:334] "Generic (PLEG): container finished" podID="7ab25308-baab-4b92-8bbb-7525b0e96550" containerID="247cd0024fb124946eca4e4e0b74c60ee385b0ac359bb90de64566a8cb7d3dff" exitCode=0 Nov 24 11:37:43 crc kubenswrapper[4678]: I1124 11:37:43.812841 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" event={"ID":"7ab25308-baab-4b92-8bbb-7525b0e96550","Type":"ContainerDied","Data":"247cd0024fb124946eca4e4e0b74c60ee385b0ac359bb90de64566a8cb7d3dff"} Nov 24 11:37:43 crc kubenswrapper[4678]: I1124 11:37:43.818280 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f4c4bbb96-gnmrh" podStartSLOduration=3.818258939 podStartE2EDuration="3.818258939s" podCreationTimestamp="2025-11-24 11:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:43.816030871 +0000 UTC m=+1274.747090510" watchObservedRunningTime="2025-11-24 11:37:43.818258939 +0000 UTC m=+1274.749318578" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.703533 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75f757b7cd-s6z6f"] Nov 24 11:37:44 crc kubenswrapper[4678]: E1124 11:37:44.705796 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerName="dnsmasq-dns" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.705820 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerName="dnsmasq-dns" Nov 24 11:37:44 crc kubenswrapper[4678]: E1124 11:37:44.705836 4678 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerName="init" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.705843 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerName="init" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.706092 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a2ab81-7e34-43bf-94ad-47a0452dbbfa" containerName="dnsmasq-dns" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.707611 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.727022 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.727253 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.787007 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-config-data\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.787348 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-config-data-custom\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.787514 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-public-tls-certs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.787604 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8cb226-d8a1-44b9-8656-e04def590cdc-logs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.787648 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vkk\" (UniqueName: \"kubernetes.io/projected/2d8cb226-d8a1-44b9-8656-e04def590cdc-kube-api-access-g6vkk\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.787732 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-internal-tls-certs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.787781 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-combined-ca-bundle\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.806862 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-75f757b7cd-s6z6f"] Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.863141 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd86b57f4-94kch" event={"ID":"030f716d-d22a-4024-972e-4c3261a22325","Type":"ContainerStarted","Data":"37af7d1a0fdf29615781e30f434a496572df6701f66931a06df1442e10094e93"} Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.864400 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.864427 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.866294 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f345f7d-85e6-4995-9706-3189c846de37","Type":"ContainerStarted","Data":"038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f"} Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.870184 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" event={"ID":"7ab25308-baab-4b92-8bbb-7525b0e96550","Type":"ContainerStarted","Data":"c57561c73afcc17e6a1ae4fd758b441a2ef9c94b15120b868b6bc4c2a6b7e409"} Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.870276 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.871001 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.890981 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-combined-ca-bundle\") pod 
\"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.891055 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-config-data\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.891073 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-config-data-custom\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.891163 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-public-tls-certs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.891212 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8cb226-d8a1-44b9-8656-e04def590cdc-logs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.891242 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vkk\" (UniqueName: \"kubernetes.io/projected/2d8cb226-d8a1-44b9-8656-e04def590cdc-kube-api-access-g6vkk\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: 
\"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.891284 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-internal-tls-certs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.894964 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8cb226-d8a1-44b9-8656-e04def590cdc-logs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.919225 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-fd86b57f4-94kch" podStartSLOduration=4.919207461 podStartE2EDuration="4.919207461s" podCreationTimestamp="2025-11-24 11:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:44.88288172 +0000 UTC m=+1275.813941359" watchObservedRunningTime="2025-11-24 11:37:44.919207461 +0000 UTC m=+1275.850267100" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.923608 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-config-data-custom\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.923900 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-config-data\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.924378 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vkk\" (UniqueName: \"kubernetes.io/projected/2d8cb226-d8a1-44b9-8656-e04def590cdc-kube-api-access-g6vkk\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.925490 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-internal-tls-certs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.926248 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-public-tls-certs\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.933726 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8cb226-d8a1-44b9-8656-e04def590cdc-combined-ca-bundle\") pod \"barbican-api-75f757b7cd-s6z6f\" (UID: \"2d8cb226-d8a1-44b9-8656-e04def590cdc\") " pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.944350 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" podStartSLOduration=4.944334202 
podStartE2EDuration="4.944334202s" podCreationTimestamp="2025-11-24 11:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:44.911307129 +0000 UTC m=+1275.842366768" watchObservedRunningTime="2025-11-24 11:37:44.944334202 +0000 UTC m=+1275.875393841" Nov 24 11:37:44 crc kubenswrapper[4678]: I1124 11:37:44.950175 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.9501624379999996 podStartE2EDuration="5.950162438s" podCreationTimestamp="2025-11-24 11:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:44.939476082 +0000 UTC m=+1275.870535741" watchObservedRunningTime="2025-11-24 11:37:44.950162438 +0000 UTC m=+1275.881222077" Nov 24 11:37:45 crc kubenswrapper[4678]: I1124 11:37:45.118010 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:45 crc kubenswrapper[4678]: I1124 11:37:45.916940 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-586bfddf5f-xk2jd" event={"ID":"ea290c11-6cf3-425a-a5be-749d3563adaa","Type":"ContainerStarted","Data":"cce924647cb5382c5a86a4a48a7b5a0e54f0da568500010a16bd2ca18954bfcd"} Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.057057 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75f757b7cd-s6z6f"] Nov 24 11:37:46 crc kubenswrapper[4678]: W1124 11:37:46.079609 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d8cb226_d8a1_44b9_8656_e04def590cdc.slice/crio-5c3dcbee5c755c66779312086be40713b7eabc8b8994a586be79db8330722194 WatchSource:0}: Error finding container 5c3dcbee5c755c66779312086be40713b7eabc8b8994a586be79db8330722194: Status 404 returned error can't find the container with id 5c3dcbee5c755c66779312086be40713b7eabc8b8994a586be79db8330722194 Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.936257 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f757b7cd-s6z6f" event={"ID":"2d8cb226-d8a1-44b9-8656-e04def590cdc","Type":"ContainerStarted","Data":"91ad38c05f87ef43c17fd9e34af09fa3c0a01840720eee3c02499302c7500282"} Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.936650 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f757b7cd-s6z6f" event={"ID":"2d8cb226-d8a1-44b9-8656-e04def590cdc","Type":"ContainerStarted","Data":"b0a1dff1ab22655ec297db56937dc5b37b0c2d703190bacafba9d176f9af4695"} Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.936661 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f757b7cd-s6z6f" 
event={"ID":"2d8cb226-d8a1-44b9-8656-e04def590cdc","Type":"ContainerStarted","Data":"5c3dcbee5c755c66779312086be40713b7eabc8b8994a586be79db8330722194"} Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.936872 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.949977 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" event={"ID":"44457729-ea53-4b02-bb60-00cd81170d9b","Type":"ContainerStarted","Data":"b820c3fcc58b7a2ed94060d3d9c869a4cf41958cad16aa05165d3dee2d5f0d87"} Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.950057 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" event={"ID":"44457729-ea53-4b02-bb60-00cd81170d9b","Type":"ContainerStarted","Data":"68de6c0099b475aedf8f9597c8d925306cdedd25a11e4c1bc9f5dddb9a5ab0fd"} Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.952729 4678 generic.go:334] "Generic (PLEG): container finished" podID="3fbb2c05-03d0-41ad-b306-0d196383c147" containerID="50b408996eabd8bc0e5b0d4f53e3cb30296cb8743c1b755d2a615a76ed7f92a7" exitCode=0 Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.952813 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dnf2l" event={"ID":"3fbb2c05-03d0-41ad-b306-0d196383c147","Type":"ContainerDied","Data":"50b408996eabd8bc0e5b0d4f53e3cb30296cb8743c1b755d2a615a76ed7f92a7"} Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.960961 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-586bfddf5f-xk2jd" event={"ID":"ea290c11-6cf3-425a-a5be-749d3563adaa","Type":"ContainerStarted","Data":"ab886dd12a646e86350804dc4f50f11de178c81f1284d1cea4e67aab4823df2f"} Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.970248 4678 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-api-75f757b7cd-s6z6f" podStartSLOduration=2.970229559 podStartE2EDuration="2.970229559s" podCreationTimestamp="2025-11-24 11:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:37:46.95564084 +0000 UTC m=+1277.886700479" watchObservedRunningTime="2025-11-24 11:37:46.970229559 +0000 UTC m=+1277.901289198" Nov 24 11:37:46 crc kubenswrapper[4678]: I1124 11:37:46.994286 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6fcdf46c94-52rq9" podStartSLOduration=3.785549146 podStartE2EDuration="6.994269113s" podCreationTimestamp="2025-11-24 11:37:40 +0000 UTC" firstStartedPulling="2025-11-24 11:37:42.366120944 +0000 UTC m=+1273.297180583" lastFinishedPulling="2025-11-24 11:37:45.574840911 +0000 UTC m=+1276.505900550" observedRunningTime="2025-11-24 11:37:46.992476254 +0000 UTC m=+1277.923535913" watchObservedRunningTime="2025-11-24 11:37:46.994269113 +0000 UTC m=+1277.925328752" Nov 24 11:37:47 crc kubenswrapper[4678]: I1124 11:37:47.013419 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-586bfddf5f-xk2jd" podStartSLOduration=3.8963449470000002 podStartE2EDuration="7.013399693s" podCreationTimestamp="2025-11-24 11:37:40 +0000 UTC" firstStartedPulling="2025-11-24 11:37:42.456641474 +0000 UTC m=+1273.387701113" lastFinishedPulling="2025-11-24 11:37:45.57369622 +0000 UTC m=+1276.504755859" observedRunningTime="2025-11-24 11:37:47.011573775 +0000 UTC m=+1277.942633424" watchObservedRunningTime="2025-11-24 11:37:47.013399693 +0000 UTC m=+1277.944459332" Nov 24 11:37:47 crc kubenswrapper[4678]: I1124 11:37:47.749617 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:47 crc kubenswrapper[4678]: I1124 11:37:47.750031 4678 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:47 crc kubenswrapper[4678]: I1124 11:37:47.790193 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:47 crc kubenswrapper[4678]: I1124 11:37:47.804197 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:47 crc kubenswrapper[4678]: I1124 11:37:47.972715 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:47 crc kubenswrapper[4678]: I1124 11:37:47.972844 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75f757b7cd-s6z6f" Nov 24 11:37:47 crc kubenswrapper[4678]: I1124 11:37:47.972859 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:48 crc kubenswrapper[4678]: I1124 11:37:48.895945 4678 scope.go:117] "RemoveContainer" containerID="e93d74ee7292fb227a0be57ff42c7304c1ab24b81f43c42900ffe41aac64025c" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.671407 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dnf2l" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.708262 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-combined-ca-bundle\") pod \"3fbb2c05-03d0-41ad-b306-0d196383c147\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.708353 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nklz8\" (UniqueName: \"kubernetes.io/projected/3fbb2c05-03d0-41ad-b306-0d196383c147-kube-api-access-nklz8\") pod \"3fbb2c05-03d0-41ad-b306-0d196383c147\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.708743 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-config-data\") pod \"3fbb2c05-03d0-41ad-b306-0d196383c147\" (UID: \"3fbb2c05-03d0-41ad-b306-0d196383c147\") " Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.757875 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fbb2c05-03d0-41ad-b306-0d196383c147" (UID: "3fbb2c05-03d0-41ad-b306-0d196383c147"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.757962 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbb2c05-03d0-41ad-b306-0d196383c147-kube-api-access-nklz8" (OuterVolumeSpecName: "kube-api-access-nklz8") pod "3fbb2c05-03d0-41ad-b306-0d196383c147" (UID: "3fbb2c05-03d0-41ad-b306-0d196383c147"). InnerVolumeSpecName "kube-api-access-nklz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.810900 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.810940 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nklz8\" (UniqueName: \"kubernetes.io/projected/3fbb2c05-03d0-41ad-b306-0d196383c147-kube-api-access-nklz8\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.814414 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-config-data" (OuterVolumeSpecName: "config-data") pod "3fbb2c05-03d0-41ad-b306-0d196383c147" (UID: "3fbb2c05-03d0-41ad-b306-0d196383c147"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.913247 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbb2c05-03d0-41ad-b306-0d196383c147-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.997470 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dnf2l" event={"ID":"3fbb2c05-03d0-41ad-b306-0d196383c147","Type":"ContainerDied","Data":"b4aef3a567aaa555c5cefca7e5a904a971aca4106e37c0660a4c2c74a7593955"} Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.997509 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4aef3a567aaa555c5cefca7e5a904a971aca4106e37c0660a4c2c74a7593955" Nov 24 11:37:49 crc kubenswrapper[4678]: I1124 11:37:49.997521 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dnf2l" Nov 24 11:37:50 crc kubenswrapper[4678]: I1124 11:37:50.170063 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 11:37:50 crc kubenswrapper[4678]: I1124 11:37:50.170329 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 11:37:50 crc kubenswrapper[4678]: I1124 11:37:50.213814 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 11:37:50 crc kubenswrapper[4678]: I1124 11:37:50.218905 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 11:37:51 crc kubenswrapper[4678]: I1124 11:37:51.006691 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 11:37:51 crc kubenswrapper[4678]: I1124 11:37:51.006747 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 11:37:51 crc kubenswrapper[4678]: E1124 11:37:51.461538 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" Nov 24 11:37:51 crc kubenswrapper[4678]: I1124 11:37:51.572896 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:37:51 crc kubenswrapper[4678]: I1124 11:37:51.652312 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-hzwmf"] Nov 24 11:37:51 crc kubenswrapper[4678]: I1124 11:37:51.652584 4678 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" podUID="e132b2d4-c6a9-4283-84aa-11a1214092e6" containerName="dnsmasq-dns" containerID="cri-o://282b111a1eda2607770fb9a604e083b3f14169dffab117b8f1e8aa0a3867092b" gracePeriod=10 Nov 24 11:37:51 crc kubenswrapper[4678]: I1124 11:37:51.739968 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:51 crc kubenswrapper[4678]: I1124 11:37:51.740147 4678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:37:51 crc kubenswrapper[4678]: I1124 11:37:51.774171 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.042061 4678 generic.go:334] "Generic (PLEG): container finished" podID="e132b2d4-c6a9-4283-84aa-11a1214092e6" containerID="282b111a1eda2607770fb9a604e083b3f14169dffab117b8f1e8aa0a3867092b" exitCode=0 Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.042418 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" event={"ID":"e132b2d4-c6a9-4283-84aa-11a1214092e6","Type":"ContainerDied","Data":"282b111a1eda2607770fb9a604e083b3f14169dffab117b8f1e8aa0a3867092b"} Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.055352 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/2.log" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.058127 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/1.log" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.061512 4678 generic.go:334] "Generic (PLEG): container finished" podID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerID="d567d5591efd267f5c56d60a89cae35438ee02e5dfef75b7482d21285b77ae32" exitCode=1 Nov 24 
11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.061586 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85857bf94-wpbc7" event={"ID":"b249aa27-98b1-40ce-85ab-5b7d0a8edf15","Type":"ContainerDied","Data":"d567d5591efd267f5c56d60a89cae35438ee02e5dfef75b7482d21285b77ae32"} Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.061622 4678 scope.go:117] "RemoveContainer" containerID="e93d74ee7292fb227a0be57ff42c7304c1ab24b81f43c42900ffe41aac64025c" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.062414 4678 scope.go:117] "RemoveContainer" containerID="d567d5591efd267f5c56d60a89cae35438ee02e5dfef75b7482d21285b77ae32" Nov 24 11:37:52 crc kubenswrapper[4678]: E1124 11:37:52.063074 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-85857bf94-wpbc7_openstack(b249aa27-98b1-40ce-85ab-5b7d0a8edf15)\"" pod="openstack/neutron-85857bf94-wpbc7" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.123011 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="ceilometer-notification-agent" containerID="cri-o://850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f" gracePeriod=30 Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.123325 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26fa8015-2aea-4aaf-baaf-bdcc15096441","Type":"ContainerStarted","Data":"91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48"} Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.123869 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.124181 4678 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="proxy-httpd" containerID="cri-o://91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48" gracePeriod=30 Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.124244 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="sg-core" containerID="cri-o://5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab" gracePeriod=30 Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.545991 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.622678 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-svc\") pod \"e132b2d4-c6a9-4283-84aa-11a1214092e6\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.622896 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-config\") pod \"e132b2d4-c6a9-4283-84aa-11a1214092e6\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.622960 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wtds\" (UniqueName: \"kubernetes.io/projected/e132b2d4-c6a9-4283-84aa-11a1214092e6-kube-api-access-2wtds\") pod \"e132b2d4-c6a9-4283-84aa-11a1214092e6\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.622993 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-nb\") pod \"e132b2d4-c6a9-4283-84aa-11a1214092e6\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.623022 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-sb\") pod \"e132b2d4-c6a9-4283-84aa-11a1214092e6\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.623165 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-swift-storage-0\") pod \"e132b2d4-c6a9-4283-84aa-11a1214092e6\" (UID: \"e132b2d4-c6a9-4283-84aa-11a1214092e6\") " Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.634869 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e132b2d4-c6a9-4283-84aa-11a1214092e6-kube-api-access-2wtds" (OuterVolumeSpecName: "kube-api-access-2wtds") pod "e132b2d4-c6a9-4283-84aa-11a1214092e6" (UID: "e132b2d4-c6a9-4283-84aa-11a1214092e6"). InnerVolumeSpecName "kube-api-access-2wtds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.705008 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e132b2d4-c6a9-4283-84aa-11a1214092e6" (UID: "e132b2d4-c6a9-4283-84aa-11a1214092e6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.705246 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-config" (OuterVolumeSpecName: "config") pod "e132b2d4-c6a9-4283-84aa-11a1214092e6" (UID: "e132b2d4-c6a9-4283-84aa-11a1214092e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.722543 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e132b2d4-c6a9-4283-84aa-11a1214092e6" (UID: "e132b2d4-c6a9-4283-84aa-11a1214092e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.726119 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.726149 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wtds\" (UniqueName: \"kubernetes.io/projected/e132b2d4-c6a9-4283-84aa-11a1214092e6-kube-api-access-2wtds\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.726158 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.726166 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 
11:37:52.746216 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e132b2d4-c6a9-4283-84aa-11a1214092e6" (UID: "e132b2d4-c6a9-4283-84aa-11a1214092e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.747951 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e132b2d4-c6a9-4283-84aa-11a1214092e6" (UID: "e132b2d4-c6a9-4283-84aa-11a1214092e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.828101 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:52 crc kubenswrapper[4678]: I1124 11:37:52.828138 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e132b2d4-c6a9-4283-84aa-11a1214092e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.242064 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qx8wj" event={"ID":"7bf1a661-b2a3-458a-b504-2cac3277bd5d","Type":"ContainerStarted","Data":"fd801e7934b8b5b53e0087782f79fb2cb2fd75161e513e16d04c1cd04384df99"} Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.296405 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" 
event={"ID":"e132b2d4-c6a9-4283-84aa-11a1214092e6","Type":"ContainerDied","Data":"c13764ffd4660039cb87493f1a45e93375f9777db615259c648a70c5bf48e6b7"} Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.296462 4678 scope.go:117] "RemoveContainer" containerID="282b111a1eda2607770fb9a604e083b3f14169dffab117b8f1e8aa0a3867092b" Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.296625 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-hzwmf" Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.357255 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qx8wj" podStartSLOduration=4.799389243 podStartE2EDuration="54.357220349s" podCreationTimestamp="2025-11-24 11:36:59 +0000 UTC" firstStartedPulling="2025-11-24 11:37:01.581642988 +0000 UTC m=+1232.512702627" lastFinishedPulling="2025-11-24 11:37:51.139474094 +0000 UTC m=+1282.070533733" observedRunningTime="2025-11-24 11:37:53.307049808 +0000 UTC m=+1284.238109447" watchObservedRunningTime="2025-11-24 11:37:53.357220349 +0000 UTC m=+1284.288279988" Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.360774 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/2.log" Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.372906 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-hzwmf"] Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.383318 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-hzwmf"] Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.410263 4678 generic.go:334] "Generic (PLEG): container finished" podID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerID="91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48" exitCode=0 Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 
11:37:53.410296 4678 generic.go:334] "Generic (PLEG): container finished" podID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerID="5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab" exitCode=2 Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.410317 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26fa8015-2aea-4aaf-baaf-bdcc15096441","Type":"ContainerDied","Data":"91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48"} Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.410344 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26fa8015-2aea-4aaf-baaf-bdcc15096441","Type":"ContainerDied","Data":"5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab"} Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.483422 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.603913 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:37:53 crc kubenswrapper[4678]: I1124 11:37:53.907174 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e132b2d4-c6a9-4283-84aa-11a1214092e6" path="/var/lib/kubelet/pods/e132b2d4-c6a9-4283-84aa-11a1214092e6/volumes" Nov 24 11:37:54 crc kubenswrapper[4678]: I1124 11:37:54.185224 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 11:37:54 crc kubenswrapper[4678]: I1124 11:37:54.185361 4678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:37:54 crc kubenswrapper[4678]: I1124 11:37:54.187826 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 11:37:54 crc kubenswrapper[4678]: I1124 11:37:54.280422 4678 
scope.go:117] "RemoveContainer" containerID="0f225c2a8aef5b34c6dc016f4c7de590a7612431a21f2a480e9c4908d21e645e" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.065920 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.116796 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-log-httpd\") pod \"26fa8015-2aea-4aaf-baaf-bdcc15096441\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.116883 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-scripts\") pod \"26fa8015-2aea-4aaf-baaf-bdcc15096441\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.116932 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-sg-core-conf-yaml\") pod \"26fa8015-2aea-4aaf-baaf-bdcc15096441\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.117058 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9k2q\" (UniqueName: \"kubernetes.io/projected/26fa8015-2aea-4aaf-baaf-bdcc15096441-kube-api-access-b9k2q\") pod \"26fa8015-2aea-4aaf-baaf-bdcc15096441\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.117074 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "26fa8015-2aea-4aaf-baaf-bdcc15096441" (UID: 
"26fa8015-2aea-4aaf-baaf-bdcc15096441"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.117128 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-run-httpd\") pod \"26fa8015-2aea-4aaf-baaf-bdcc15096441\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.117482 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-config-data\") pod \"26fa8015-2aea-4aaf-baaf-bdcc15096441\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.117514 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-combined-ca-bundle\") pod \"26fa8015-2aea-4aaf-baaf-bdcc15096441\" (UID: \"26fa8015-2aea-4aaf-baaf-bdcc15096441\") " Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.118039 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "26fa8015-2aea-4aaf-baaf-bdcc15096441" (UID: "26fa8015-2aea-4aaf-baaf-bdcc15096441"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.118996 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.119025 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26fa8015-2aea-4aaf-baaf-bdcc15096441-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.124864 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-scripts" (OuterVolumeSpecName: "scripts") pod "26fa8015-2aea-4aaf-baaf-bdcc15096441" (UID: "26fa8015-2aea-4aaf-baaf-bdcc15096441"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.125114 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fa8015-2aea-4aaf-baaf-bdcc15096441-kube-api-access-b9k2q" (OuterVolumeSpecName: "kube-api-access-b9k2q") pod "26fa8015-2aea-4aaf-baaf-bdcc15096441" (UID: "26fa8015-2aea-4aaf-baaf-bdcc15096441"). InnerVolumeSpecName "kube-api-access-b9k2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.163739 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "26fa8015-2aea-4aaf-baaf-bdcc15096441" (UID: "26fa8015-2aea-4aaf-baaf-bdcc15096441"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.220688 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.220974 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.221046 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9k2q\" (UniqueName: \"kubernetes.io/projected/26fa8015-2aea-4aaf-baaf-bdcc15096441-kube-api-access-b9k2q\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.228782 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-config-data" (OuterVolumeSpecName: "config-data") pod "26fa8015-2aea-4aaf-baaf-bdcc15096441" (UID: "26fa8015-2aea-4aaf-baaf-bdcc15096441"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.260763 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26fa8015-2aea-4aaf-baaf-bdcc15096441" (UID: "26fa8015-2aea-4aaf-baaf-bdcc15096441"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.322977 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.323011 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fa8015-2aea-4aaf-baaf-bdcc15096441-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.450643 4678 generic.go:334] "Generic (PLEG): container finished" podID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerID="850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f" exitCode=0 Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.450727 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.450727 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26fa8015-2aea-4aaf-baaf-bdcc15096441","Type":"ContainerDied","Data":"850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f"} Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.451450 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26fa8015-2aea-4aaf-baaf-bdcc15096441","Type":"ContainerDied","Data":"02f9e6f27545e901f4b27600d7c3a1ac102724ed00a2760cf11c8dbc0b4d47a4"} Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.451489 4678 scope.go:117] "RemoveContainer" containerID="91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.492234 4678 scope.go:117] "RemoveContainer" containerID="5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab" Nov 24 11:37:55 crc 
kubenswrapper[4678]: I1124 11:37:55.530261 4678 scope.go:117] "RemoveContainer" containerID="850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.557653 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.573657 4678 scope.go:117] "RemoveContainer" containerID="91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.581992 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.585808 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48\": container with ID starting with 91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48 not found: ID does not exist" containerID="91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.585864 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48"} err="failed to get container status \"91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48\": rpc error: code = NotFound desc = could not find container \"91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48\": container with ID starting with 91063aceb559839eae50da8123c377365fb4267a4b3bcc629f293cfc880a6f48 not found: ID does not exist" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.585891 4678 scope.go:117] "RemoveContainer" containerID="5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab" Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.591851 4678 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab\": container with ID starting with 5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab not found: ID does not exist" containerID="5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.591918 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab"} err="failed to get container status \"5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab\": rpc error: code = NotFound desc = could not find container \"5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab\": container with ID starting with 5c43fd958f24e3e400ad433b9b33e6b6e30b2210cc5822704c115f0f59abfbab not found: ID does not exist" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.591952 4678 scope.go:117] "RemoveContainer" containerID="850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.592729 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.593262 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="ceilometer-notification-agent" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593283 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="ceilometer-notification-agent" Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.593302 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e132b2d4-c6a9-4283-84aa-11a1214092e6" containerName="dnsmasq-dns" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593310 4678 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e132b2d4-c6a9-4283-84aa-11a1214092e6" containerName="dnsmasq-dns" Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.593325 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e132b2d4-c6a9-4283-84aa-11a1214092e6" containerName="init" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593332 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e132b2d4-c6a9-4283-84aa-11a1214092e6" containerName="init" Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.593349 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="proxy-httpd" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593355 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="proxy-httpd" Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.593368 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbb2c05-03d0-41ad-b306-0d196383c147" containerName="heat-db-sync" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593375 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbb2c05-03d0-41ad-b306-0d196383c147" containerName="heat-db-sync" Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.593396 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="sg-core" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593401 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="sg-core" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593601 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="ceilometer-notification-agent" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593616 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" 
containerName="proxy-httpd" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593636 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbb2c05-03d0-41ad-b306-0d196383c147" containerName="heat-db-sync" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593645 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" containerName="sg-core" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.593660 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e132b2d4-c6a9-4283-84aa-11a1214092e6" containerName="dnsmasq-dns" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.595605 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:37:55 crc kubenswrapper[4678]: E1124 11:37:55.596992 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f\": container with ID starting with 850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f not found: ID does not exist" containerID="850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.597072 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f"} err="failed to get container status \"850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f\": rpc error: code = NotFound desc = could not find container \"850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f\": container with ID starting with 850fd3ff4a08b1d1eb6ba195707aa3ac607e29928bb1429d739b9ab53df4288f not found: ID does not exist" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.598391 4678 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ceilometer-config-data" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.599503 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.613488 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.741272 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-config-data\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.741477 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.741519 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-scripts\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.741566 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0" Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.741612 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-j44vw\" (UniqueName: \"kubernetes.io/projected/257bbe91-8baa-435d-9caf-a4945285bfe7-kube-api-access-j44vw\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.741697 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-log-httpd\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.741838 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-run-httpd\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.863179 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-log-httpd\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.863363 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-run-httpd\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.863426 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-config-data\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.863581 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.863655 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-scripts\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.863743 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.863813 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j44vw\" (UniqueName: \"kubernetes.io/projected/257bbe91-8baa-435d-9caf-a4945285bfe7-kube-api-access-j44vw\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.864319 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-log-httpd\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.864361 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-run-httpd\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.870370 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-scripts\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.871811 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.872716 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.874078 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-config-data\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.882220 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j44vw\" (UniqueName: \"kubernetes.io/projected/257bbe91-8baa-435d-9caf-a4945285bfe7-kube-api-access-j44vw\") pod \"ceilometer-0\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " pod="openstack/ceilometer-0"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.910217 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fa8015-2aea-4aaf-baaf-bdcc15096441" path="/var/lib/kubelet/pods/26fa8015-2aea-4aaf-baaf-bdcc15096441/volumes"
Nov 24 11:37:55 crc kubenswrapper[4678]: I1124 11:37:55.924696 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 11:37:56 crc kubenswrapper[4678]: I1124 11:37:56.417525 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:37:56 crc kubenswrapper[4678]: W1124 11:37:56.418124 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod257bbe91_8baa_435d_9caf_a4945285bfe7.slice/crio-ba37ab0e06262641c8526426dfbba121e3117ac2e052d30c28588888e65eb7eb WatchSource:0}: Error finding container ba37ab0e06262641c8526426dfbba121e3117ac2e052d30c28588888e65eb7eb: Status 404 returned error can't find the container with id ba37ab0e06262641c8526426dfbba121e3117ac2e052d30c28588888e65eb7eb
Nov 24 11:37:56 crc kubenswrapper[4678]: I1124 11:37:56.465618 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerStarted","Data":"ba37ab0e06262641c8526426dfbba121e3117ac2e052d30c28588888e65eb7eb"}
Nov 24 11:37:56 crc kubenswrapper[4678]: I1124 11:37:56.729966 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75f757b7cd-s6z6f"
Nov 24 11:37:56 crc kubenswrapper[4678]: I1124 11:37:56.912698 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75f757b7cd-s6z6f"
Nov 24 11:37:56 crc kubenswrapper[4678]: I1124 11:37:56.983498 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-fd86b57f4-94kch"]
Nov 24 11:37:56 crc kubenswrapper[4678]: I1124 11:37:56.983746 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-fd86b57f4-94kch" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api-log" containerID="cri-o://2295dfbf92071bbefdec8f6dd079bf549e47ca032a35bbd3169b546f0dd95f2b" gracePeriod=30
Nov 24 11:37:56 crc kubenswrapper[4678]: I1124 11:37:56.984177 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-fd86b57f4-94kch" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api" containerID="cri-o://37af7d1a0fdf29615781e30f434a496572df6701f66931a06df1442e10094e93" gracePeriod=30
Nov 24 11:37:56 crc kubenswrapper[4678]: I1124 11:37:56.990985 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-fd86b57f4-94kch" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": EOF"
Nov 24 11:37:57 crc kubenswrapper[4678]: I1124 11:37:57.477166 4678 generic.go:334] "Generic (PLEG): container finished" podID="030f716d-d22a-4024-972e-4c3261a22325" containerID="2295dfbf92071bbefdec8f6dd079bf549e47ca032a35bbd3169b546f0dd95f2b" exitCode=143
Nov 24 11:37:57 crc kubenswrapper[4678]: I1124 11:37:57.477255 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd86b57f4-94kch" event={"ID":"030f716d-d22a-4024-972e-4c3261a22325","Type":"ContainerDied","Data":"2295dfbf92071bbefdec8f6dd079bf549e47ca032a35bbd3169b546f0dd95f2b"}
Nov 24 11:37:57 crc kubenswrapper[4678]: I1124 11:37:57.479056 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerStarted","Data":"a3765922381eee06bd9373b751beef7e24af803256a4fa9caa96454b76fbcce7"}
Nov 24 11:37:58 crc kubenswrapper[4678]: I1124 11:37:58.497479 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerStarted","Data":"1669000afb61c176dfad96e3665f4482d795f885089614888d8f7c4b8b4f9ec5"}
Nov 24 11:37:59 crc kubenswrapper[4678]: I1124 11:37:59.512659 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerStarted","Data":"e2b5bfe3d63d2ddb3c6a30ab3e324e5ac3211f9c16ce7130b9842c31e7e870ac"}
Nov 24 11:37:59 crc kubenswrapper[4678]: I1124 11:37:59.514555 4678 generic.go:334] "Generic (PLEG): container finished" podID="7bf1a661-b2a3-458a-b504-2cac3277bd5d" containerID="fd801e7934b8b5b53e0087782f79fb2cb2fd75161e513e16d04c1cd04384df99" exitCode=0
Nov 24 11:37:59 crc kubenswrapper[4678]: I1124 11:37:59.514589 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qx8wj" event={"ID":"7bf1a661-b2a3-458a-b504-2cac3277bd5d","Type":"ContainerDied","Data":"fd801e7934b8b5b53e0087782f79fb2cb2fd75161e513e16d04c1cd04384df99"}
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.092807 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-85857bf94-wpbc7"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.093235 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85857bf94-wpbc7"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.095915 4678 scope.go:117] "RemoveContainer" containerID="d567d5591efd267f5c56d60a89cae35438ee02e5dfef75b7482d21285b77ae32"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.096799 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-85857bf94-wpbc7" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.188:9696/\": dial tcp 10.217.0.188:9696: connect: connection refused"
Nov 24 11:38:00 crc kubenswrapper[4678]: E1124 11:38:00.096855 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-85857bf94-wpbc7_openstack(b249aa27-98b1-40ce-85ab-5b7d0a8edf15)\"" pod="openstack/neutron-85857bf94-wpbc7" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.297042 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.297098 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.297149 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.298045 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd5ea218f678046a66e5b35e3df6bfeb83c4a006c488a84e5029cd1536ff6717"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.298114 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://dd5ea218f678046a66e5b35e3df6bfeb83c4a006c488a84e5029cd1536ff6717" gracePeriod=600
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.533164 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="dd5ea218f678046a66e5b35e3df6bfeb83c4a006c488a84e5029cd1536ff6717" exitCode=0
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.533244 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"dd5ea218f678046a66e5b35e3df6bfeb83c4a006c488a84e5029cd1536ff6717"}
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.533291 4678 scope.go:117] "RemoveContainer" containerID="ae5ad808ee433867f6ed22b16c3cabcd9999e49e8fb7ad6c2494c4e5839c237e"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.540785 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerStarted","Data":"d16f5d67e70693f2d5bfe1da45ff7aa26c083105a581d3c7adf327f008b22548"}
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.540881 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.592775 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.29843623 podStartE2EDuration="5.592749256s" podCreationTimestamp="2025-11-24 11:37:55 +0000 UTC" firstStartedPulling="2025-11-24 11:37:56.420414257 +0000 UTC m=+1287.351473896" lastFinishedPulling="2025-11-24 11:37:59.714727263 +0000 UTC m=+1290.645786922" observedRunningTime="2025-11-24 11:38:00.563945786 +0000 UTC m=+1291.495005515" watchObservedRunningTime="2025-11-24 11:38:00.592749256 +0000 UTC m=+1291.523808915"
Nov 24 11:38:00 crc kubenswrapper[4678]: I1124 11:38:00.985932 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qx8wj"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.132456 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-db-sync-config-data\") pod \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") "
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.132546 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z99c7\" (UniqueName: \"kubernetes.io/projected/7bf1a661-b2a3-458a-b504-2cac3277bd5d-kube-api-access-z99c7\") pod \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") "
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.132737 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-config-data\") pod \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") "
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.132758 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-combined-ca-bundle\") pod \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") "
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.132774 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-scripts\") pod \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") "
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.132824 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bf1a661-b2a3-458a-b504-2cac3277bd5d-etc-machine-id\") pod \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\" (UID: \"7bf1a661-b2a3-458a-b504-2cac3277bd5d\") "
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.133400 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bf1a661-b2a3-458a-b504-2cac3277bd5d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7bf1a661-b2a3-458a-b504-2cac3277bd5d" (UID: "7bf1a661-b2a3-458a-b504-2cac3277bd5d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.147409 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bf1a661-b2a3-458a-b504-2cac3277bd5d-kube-api-access-z99c7" (OuterVolumeSpecName: "kube-api-access-z99c7") pod "7bf1a661-b2a3-458a-b504-2cac3277bd5d" (UID: "7bf1a661-b2a3-458a-b504-2cac3277bd5d"). InnerVolumeSpecName "kube-api-access-z99c7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.148283 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7bf1a661-b2a3-458a-b504-2cac3277bd5d" (UID: "7bf1a661-b2a3-458a-b504-2cac3277bd5d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.153601 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-scripts" (OuterVolumeSpecName: "scripts") pod "7bf1a661-b2a3-458a-b504-2cac3277bd5d" (UID: "7bf1a661-b2a3-458a-b504-2cac3277bd5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.163900 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bf1a661-b2a3-458a-b504-2cac3277bd5d" (UID: "7bf1a661-b2a3-458a-b504-2cac3277bd5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.198078 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-config-data" (OuterVolumeSpecName: "config-data") pod "7bf1a661-b2a3-458a-b504-2cac3277bd5d" (UID: "7bf1a661-b2a3-458a-b504-2cac3277bd5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.236527 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.236586 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.236628 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.236647 4678 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bf1a661-b2a3-458a-b504-2cac3277bd5d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.236693 4678 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bf1a661-b2a3-458a-b504-2cac3277bd5d-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.236714 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z99c7\" (UniqueName: \"kubernetes.io/projected/7bf1a661-b2a3-458a-b504-2cac3277bd5d-kube-api-access-z99c7\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.416496 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fd86b57f4-94kch" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": read tcp 10.217.0.2:36066->10.217.0.197:9311: read: connection reset by peer"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.417099 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fd86b57f4-94kch" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": read tcp 10.217.0.2:36082->10.217.0.197:9311: read: connection reset by peer"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.570435 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qx8wj" event={"ID":"7bf1a661-b2a3-458a-b504-2cac3277bd5d","Type":"ContainerDied","Data":"9acbced18916141ff136778636c6c693ef603d3124cf4b1155f394e4aa53e51a"}
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.570478 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9acbced18916141ff136778636c6c693ef603d3124cf4b1155f394e4aa53e51a"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.570473 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qx8wj"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.578795 4678 generic.go:334] "Generic (PLEG): container finished" podID="030f716d-d22a-4024-972e-4c3261a22325" containerID="37af7d1a0fdf29615781e30f434a496572df6701f66931a06df1442e10094e93" exitCode=0
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.578856 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd86b57f4-94kch" event={"ID":"030f716d-d22a-4024-972e-4c3261a22325","Type":"ContainerDied","Data":"37af7d1a0fdf29615781e30f434a496572df6701f66931a06df1442e10094e93"}
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.583371 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363"}
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.835902 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 11:38:01 crc kubenswrapper[4678]: E1124 11:38:01.845727 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf1a661-b2a3-458a-b504-2cac3277bd5d" containerName="cinder-db-sync"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.845757 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf1a661-b2a3-458a-b504-2cac3277bd5d" containerName="cinder-db-sync"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.846104 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf1a661-b2a3-458a-b504-2cac3277bd5d" containerName="cinder-db-sync"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.849025 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.853345 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.853394 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-98r9d"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.854927 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.854956 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.861391 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fd86b57f4-94kch"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.876801 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.944402 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dxgvv"]
Nov 24 11:38:01 crc kubenswrapper[4678]: E1124 11:38:01.944893 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api-log"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.944908 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api-log"
Nov 24 11:38:01 crc kubenswrapper[4678]: E1124 11:38:01.944955 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.944964 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.951009 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.951115 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.951176 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jg5m\" (UniqueName: \"kubernetes.io/projected/fa325b77-8734-4325-a644-e4b421e45843-kube-api-access-5jg5m\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.951323 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.951415 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.951462 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa325b77-8734-4325-a644-e4b421e45843-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.952032 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api-log"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.952087 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="030f716d-d22a-4024-972e-4c3261a22325" containerName="barbican-api"
Nov 24 11:38:01 crc kubenswrapper[4678]: I1124 11:38:01.968899 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.024716 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dxgvv"]
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.053447 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data\") pod \"030f716d-d22a-4024-972e-4c3261a22325\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") "
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.059198 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwwtx\" (UniqueName: \"kubernetes.io/projected/030f716d-d22a-4024-972e-4c3261a22325-kube-api-access-cwwtx\") pod \"030f716d-d22a-4024-972e-4c3261a22325\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") "
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.059427 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/030f716d-d22a-4024-972e-4c3261a22325-logs\") pod \"030f716d-d22a-4024-972e-4c3261a22325\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") "
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.059541 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data-custom\") pod \"030f716d-d22a-4024-972e-4c3261a22325\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") "
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.059689 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-combined-ca-bundle\") pod \"030f716d-d22a-4024-972e-4c3261a22325\" (UID: \"030f716d-d22a-4024-972e-4c3261a22325\") "
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.060129 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.060218 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030f716d-d22a-4024-972e-4c3261a22325-logs" (OuterVolumeSpecName: "logs") pod "030f716d-d22a-4024-972e-4c3261a22325" (UID: "030f716d-d22a-4024-972e-4c3261a22325"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.060222 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jg5m\" (UniqueName: \"kubernetes.io/projected/fa325b77-8734-4325-a644-e4b421e45843-kube-api-access-5jg5m\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.060424 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.060497 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.060531 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa325b77-8734-4325-a644-e4b421e45843-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.060728 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.060820 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/030f716d-d22a-4024-972e-4c3261a22325-logs\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.062766 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa325b77-8734-4325-a644-e4b421e45843-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.066955 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.067307 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.081411 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.083166 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.083321 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030f716d-d22a-4024-972e-4c3261a22325-kube-api-access-cwwtx" (OuterVolumeSpecName: "kube-api-access-cwwtx") pod "030f716d-d22a-4024-972e-4c3261a22325" (UID: "030f716d-d22a-4024-972e-4c3261a22325"). InnerVolumeSpecName "kube-api-access-cwwtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.100823 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jg5m\" (UniqueName: \"kubernetes.io/projected/fa325b77-8734-4325-a644-e4b421e45843-kube-api-access-5jg5m\") pod \"cinder-scheduler-0\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.100944 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "030f716d-d22a-4024-972e-4c3261a22325" (UID: "030f716d-d22a-4024-972e-4c3261a22325"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.133764 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.138565 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.144629 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.163276 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-svc\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.163530 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.163826 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-config\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.163937 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv"
Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.164252 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zmtj2\" (UniqueName: \"kubernetes.io/projected/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-kube-api-access-zmtj2\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.164452 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.164715 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwwtx\" (UniqueName: \"kubernetes.io/projected/030f716d-d22a-4024-972e-4c3261a22325-kube-api-access-cwwtx\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.164823 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.175425 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.178451 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data" (OuterVolumeSpecName: "config-data") pod "030f716d-d22a-4024-972e-4c3261a22325" (UID: "030f716d-d22a-4024-972e-4c3261a22325"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.183318 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "030f716d-d22a-4024-972e-4c3261a22325" (UID: "030f716d-d22a-4024-972e-4c3261a22325"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267268 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267352 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhnx8\" (UniqueName: \"kubernetes.io/projected/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-kube-api-access-nhnx8\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267377 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267436 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-scripts\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") 
" pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267462 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-logs\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267494 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-svc\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267528 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267546 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267567 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-config\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267598 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267623 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267680 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmtj2\" (UniqueName: \"kubernetes.io/projected/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-kube-api-access-zmtj2\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267712 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267762 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.267773 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030f716d-d22a-4024-972e-4c3261a22325-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 
11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.268949 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-svc\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.269054 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.269015 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.268988 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.270303 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-config\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.274056 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.285561 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmtj2\" (UniqueName: \"kubernetes.io/projected/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-kube-api-access-zmtj2\") pod \"dnsmasq-dns-5784cf869f-dxgvv\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.302080 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.328272 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-697d9cc569-8n57v" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.369391 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.369491 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.369592 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.369693 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nhnx8\" (UniqueName: \"kubernetes.io/projected/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-kube-api-access-nhnx8\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.369725 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.369798 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-scripts\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.369833 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-logs\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.370422 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-logs\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.375874 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 
11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.378106 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.380311 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.383089 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.386535 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-scripts\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.414141 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhnx8\" (UniqueName: \"kubernetes.io/projected/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-kube-api-access-nhnx8\") pod \"cinder-api-0\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.449712 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85857bf94-wpbc7"] Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.450052 4678 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85857bf94-wpbc7" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-api" containerID="cri-o://c67ff5f4392859d892fba844dbd76aea0671eb358cdb2961e81fad8ab5e1364e" gracePeriod=30 Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.559998 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.622845 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd86b57f4-94kch" event={"ID":"030f716d-d22a-4024-972e-4c3261a22325","Type":"ContainerDied","Data":"ff21ffd9a15748bcbc4abf68b3ab7e8965897935ec7a7de55ba59417f5c5470b"} Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.622910 4678 scope.go:117] "RemoveContainer" containerID="37af7d1a0fdf29615781e30f434a496572df6701f66931a06df1442e10094e93" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.623146 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fd86b57f4-94kch" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.690702 4678 scope.go:117] "RemoveContainer" containerID="2295dfbf92071bbefdec8f6dd079bf549e47ca032a35bbd3169b546f0dd95f2b" Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.743011 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-fd86b57f4-94kch"] Nov 24 11:38:02 crc kubenswrapper[4678]: I1124 11:38:02.772041 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-fd86b57f4-94kch"] Nov 24 11:38:03 crc kubenswrapper[4678]: I1124 11:38:03.007619 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:38:03 crc kubenswrapper[4678]: W1124 11:38:03.055188 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa325b77_8734_4325_a644_e4b421e45843.slice/crio-ea1e488edfd336b0805dadb17bbfdbd98a4e7d723994f55218c42f60c940a6a7 WatchSource:0}: Error finding container ea1e488edfd336b0805dadb17bbfdbd98a4e7d723994f55218c42f60c940a6a7: Status 404 returned error can't find the container with id ea1e488edfd336b0805dadb17bbfdbd98a4e7d723994f55218c42f60c940a6a7 Nov 24 11:38:03 crc kubenswrapper[4678]: I1124 11:38:03.225743 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dxgvv"] Nov 24 11:38:03 crc kubenswrapper[4678]: I1124 11:38:03.383196 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:38:03 crc kubenswrapper[4678]: I1124 11:38:03.647566 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c","Type":"ContainerStarted","Data":"597c1e1b421ba5c834974733420d479d036821e5745112ee8fa55ff0debd6b45"} Nov 24 11:38:03 crc kubenswrapper[4678]: I1124 11:38:03.650875 4678 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" event={"ID":"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248","Type":"ContainerStarted","Data":"c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1"} Nov 24 11:38:03 crc kubenswrapper[4678]: I1124 11:38:03.650924 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" event={"ID":"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248","Type":"ContainerStarted","Data":"0574f96e8898ad762d9b667f6349c5c55ca90c14d665fa19943c451258ae62ca"} Nov 24 11:38:03 crc kubenswrapper[4678]: I1124 11:38:03.656785 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa325b77-8734-4325-a644-e4b421e45843","Type":"ContainerStarted","Data":"ea1e488edfd336b0805dadb17bbfdbd98a4e7d723994f55218c42f60c940a6a7"} Nov 24 11:38:03 crc kubenswrapper[4678]: I1124 11:38:03.927909 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030f716d-d22a-4024-972e-4c3261a22325" path="/var/lib/kubelet/pods/030f716d-d22a-4024-972e-4c3261a22325/volumes" Nov 24 11:38:04 crc kubenswrapper[4678]: I1124 11:38:04.339804 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:38:04 crc kubenswrapper[4678]: I1124 11:38:04.748526 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c","Type":"ContainerStarted","Data":"b8beabc8c137bd68cdc9d83cced8b55faaac6d260af150e9c7974af1c7cb1374"} Nov 24 11:38:04 crc kubenswrapper[4678]: I1124 11:38:04.755287 4678 generic.go:334] "Generic (PLEG): container finished" podID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" containerID="c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1" exitCode=0 Nov 24 11:38:04 crc kubenswrapper[4678]: I1124 11:38:04.755323 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" 
event={"ID":"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248","Type":"ContainerDied","Data":"c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1"} Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.780428 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c","Type":"ContainerStarted","Data":"0146b480d3a5f09b8eccf47c2ede2fd87021480f5bf7b5d1a65e7559d2e743d8"} Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.781025 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.780563 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerName="cinder-api" containerID="cri-o://0146b480d3a5f09b8eccf47c2ede2fd87021480f5bf7b5d1a65e7559d2e743d8" gracePeriod=30 Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.780506 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerName="cinder-api-log" containerID="cri-o://b8beabc8c137bd68cdc9d83cced8b55faaac6d260af150e9c7974af1c7cb1374" gracePeriod=30 Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.784338 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa325b77-8734-4325-a644-e4b421e45843","Type":"ContainerStarted","Data":"66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7"} Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.791348 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" event={"ID":"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248","Type":"ContainerStarted","Data":"f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317"} Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.791698 4678 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.810702 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.810663784 podStartE2EDuration="3.810663784s" podCreationTimestamp="2025-11-24 11:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:05.806461022 +0000 UTC m=+1296.737520661" watchObservedRunningTime="2025-11-24 11:38:05.810663784 +0000 UTC m=+1296.741723433" Nov 24 11:38:05 crc kubenswrapper[4678]: I1124 11:38:05.830724 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" podStartSLOduration=4.83070239 podStartE2EDuration="4.83070239s" podCreationTimestamp="2025-11-24 11:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:05.82435866 +0000 UTC m=+1296.755418299" watchObservedRunningTime="2025-11-24 11:38:05.83070239 +0000 UTC m=+1296.761762029" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.803611 4678 generic.go:334] "Generic (PLEG): container finished" podID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerID="0146b480d3a5f09b8eccf47c2ede2fd87021480f5bf7b5d1a65e7559d2e743d8" exitCode=0 Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.804290 4678 generic.go:334] "Generic (PLEG): container finished" podID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerID="b8beabc8c137bd68cdc9d83cced8b55faaac6d260af150e9c7974af1c7cb1374" exitCode=143 Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.803802 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c","Type":"ContainerDied","Data":"0146b480d3a5f09b8eccf47c2ede2fd87021480f5bf7b5d1a65e7559d2e743d8"} Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.804349 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c","Type":"ContainerDied","Data":"b8beabc8c137bd68cdc9d83cced8b55faaac6d260af150e9c7974af1c7cb1374"} Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.804381 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c","Type":"ContainerDied","Data":"597c1e1b421ba5c834974733420d479d036821e5745112ee8fa55ff0debd6b45"} Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.804398 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="597c1e1b421ba5c834974733420d479d036821e5745112ee8fa55ff0debd6b45" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.807421 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa325b77-8734-4325-a644-e4b421e45843","Type":"ContainerStarted","Data":"5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b"} Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.819014 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.838441 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.83148582 podStartE2EDuration="5.838422029s" podCreationTimestamp="2025-11-24 11:38:01 +0000 UTC" firstStartedPulling="2025-11-24 11:38:03.068480134 +0000 UTC m=+1293.999539773" lastFinishedPulling="2025-11-24 11:38:04.075416343 +0000 UTC m=+1295.006475982" observedRunningTime="2025-11-24 11:38:06.830086776 +0000 UTC m=+1297.761146425" watchObservedRunningTime="2025-11-24 11:38:06.838422029 +0000 UTC m=+1297.769481668" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.934038 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-scripts\") pod \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.934194 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhnx8\" (UniqueName: \"kubernetes.io/projected/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-kube-api-access-nhnx8\") pod \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.935149 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data-custom\") pod \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.935292 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-combined-ca-bundle\") 
pod \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.935656 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-etc-machine-id\") pod \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.935745 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" (UID: "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.935764 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-logs\") pod \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.935793 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data\") pod \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\" (UID: \"c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c\") " Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.936594 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-logs" (OuterVolumeSpecName: "logs") pod "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" (UID: "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.937057 4678 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.937382 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.941736 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-kube-api-access-nhnx8" (OuterVolumeSpecName: "kube-api-access-nhnx8") pod "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" (UID: "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c"). InnerVolumeSpecName "kube-api-access-nhnx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.949912 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" (UID: "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.954920 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-scripts" (OuterVolumeSpecName: "scripts") pod "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" (UID: "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:06 crc kubenswrapper[4678]: I1124 11:38:06.967471 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" (UID: "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:06.997000 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data" (OuterVolumeSpecName: "config-data") pod "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" (UID: "c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.039725 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.039755 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.039765 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.039776 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:07 
crc kubenswrapper[4678]: I1124 11:38:07.039785 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhnx8\" (UniqueName: \"kubernetes.io/projected/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c-kube-api-access-nhnx8\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.274950 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.818551 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.856554 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.868184 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.886477 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:38:07 crc kubenswrapper[4678]: E1124 11:38:07.887049 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerName="cinder-api-log" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.887076 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerName="cinder-api-log" Nov 24 11:38:07 crc kubenswrapper[4678]: E1124 11:38:07.887135 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerName="cinder-api" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.887145 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerName="cinder-api" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.887486 4678 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerName="cinder-api" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.887523 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" containerName="cinder-api-log" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.889327 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.892179 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.892537 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.893023 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.927350 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c" path="/var/lib/kubelet/pods/c8bcfd4b-eb34-41e2-bbe8-7e842dbaaa9c/volumes" Nov 24 11:38:07 crc kubenswrapper[4678]: I1124 11:38:07.928091 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.061526 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-config-data\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.061777 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.061821 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-scripts\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.061849 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.061979 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.063183 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whk84\" (UniqueName: \"kubernetes.io/projected/32725d0f-f32f-4ec4-9982-ebae7a555802-kube-api-access-whk84\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.063272 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32725d0f-f32f-4ec4-9982-ebae7a555802-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 
crc kubenswrapper[4678]: I1124 11:38:08.063299 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-public-tls-certs\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.063343 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32725d0f-f32f-4ec4-9982-ebae7a555802-logs\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.165890 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-config-data\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.166021 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-config-data-custom\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.166046 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-scripts\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.166163 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.166704 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.166737 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whk84\" (UniqueName: \"kubernetes.io/projected/32725d0f-f32f-4ec4-9982-ebae7a555802-kube-api-access-whk84\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.166789 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32725d0f-f32f-4ec4-9982-ebae7a555802-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.166811 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-public-tls-certs\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.166842 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32725d0f-f32f-4ec4-9982-ebae7a555802-logs\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc 
kubenswrapper[4678]: I1124 11:38:08.167205 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32725d0f-f32f-4ec4-9982-ebae7a555802-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.167220 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32725d0f-f32f-4ec4-9982-ebae7a555802-logs\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.171836 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.172336 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.172437 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-config-data\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.172492 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.172530 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-config-data-custom\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.187390 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whk84\" (UniqueName: \"kubernetes.io/projected/32725d0f-f32f-4ec4-9982-ebae7a555802-kube-api-access-whk84\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.187727 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32725d0f-f32f-4ec4-9982-ebae7a555802-scripts\") pod \"cinder-api-0\" (UID: \"32725d0f-f32f-4ec4-9982-ebae7a555802\") " pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.216969 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.710707 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 11:38:08 crc kubenswrapper[4678]: W1124 11:38:08.735896 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32725d0f_f32f_4ec4_9982_ebae7a555802.slice/crio-f95e6b0c12a7b5e5f0e56f72394f7412c236dfb61418678fcb1685d54a2effd8 WatchSource:0}: Error finding container f95e6b0c12a7b5e5f0e56f72394f7412c236dfb61418678fcb1685d54a2effd8: Status 404 returned error can't find the container with id f95e6b0c12a7b5e5f0e56f72394f7412c236dfb61418678fcb1685d54a2effd8 Nov 24 11:38:08 crc kubenswrapper[4678]: I1124 11:38:08.838792 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32725d0f-f32f-4ec4-9982-ebae7a555802","Type":"ContainerStarted","Data":"f95e6b0c12a7b5e5f0e56f72394f7412c236dfb61418678fcb1685d54a2effd8"} Nov 24 11:38:09 crc kubenswrapper[4678]: I1124 11:38:09.849475 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32725d0f-f32f-4ec4-9982-ebae7a555802","Type":"ContainerStarted","Data":"2555d838ee3878b782e90387a938c6921b6aae11f0f065a7a0b6763a0a117676"} Nov 24 11:38:10 crc kubenswrapper[4678]: I1124 11:38:10.863169 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32725d0f-f32f-4ec4-9982-ebae7a555802","Type":"ContainerStarted","Data":"b7dc1834aa7a0a78097e6f5ff6958b1ca6b1060d95409666bc79f45755887cd9"} Nov 24 11:38:10 crc kubenswrapper[4678]: I1124 11:38:10.863696 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.303850 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:12 
crc kubenswrapper[4678]: I1124 11:38:12.341842 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.341824606 podStartE2EDuration="5.341824606s" podCreationTimestamp="2025-11-24 11:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:10.88517689 +0000 UTC m=+1301.816236529" watchObservedRunningTime="2025-11-24 11:38:12.341824606 +0000 UTC m=+1303.272884245" Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.364930 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mrl65"] Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.365222 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" podUID="7ab25308-baab-4b92-8bbb-7525b0e96550" containerName="dnsmasq-dns" containerID="cri-o://c57561c73afcc17e6a1ae4fd758b441a2ef9c94b15120b868b6bc4c2a6b7e409" gracePeriod=10 Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.643173 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.676439 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.694742 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f4c4bbb96-gnmrh" Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.739045 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.912877 4678 generic.go:334] "Generic (PLEG): container finished" podID="7ab25308-baab-4b92-8bbb-7525b0e96550" containerID="c57561c73afcc17e6a1ae4fd758b441a2ef9c94b15120b868b6bc4c2a6b7e409" 
exitCode=0 Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.913107 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fa325b77-8734-4325-a644-e4b421e45843" containerName="cinder-scheduler" containerID="cri-o://66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7" gracePeriod=30 Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.913361 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" event={"ID":"7ab25308-baab-4b92-8bbb-7525b0e96550","Type":"ContainerDied","Data":"c57561c73afcc17e6a1ae4fd758b441a2ef9c94b15120b868b6bc4c2a6b7e409"} Nov 24 11:38:12 crc kubenswrapper[4678]: I1124 11:38:12.914508 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fa325b77-8734-4325-a644-e4b421e45843" containerName="probe" containerID="cri-o://5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b" gracePeriod=30 Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.103059 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.188633 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-swift-storage-0\") pod \"7ab25308-baab-4b92-8bbb-7525b0e96550\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.189125 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-svc\") pod \"7ab25308-baab-4b92-8bbb-7525b0e96550\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.189150 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-nb\") pod \"7ab25308-baab-4b92-8bbb-7525b0e96550\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.189202 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-config\") pod \"7ab25308-baab-4b92-8bbb-7525b0e96550\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.189246 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-872m4\" (UniqueName: \"kubernetes.io/projected/7ab25308-baab-4b92-8bbb-7525b0e96550-kube-api-access-872m4\") pod \"7ab25308-baab-4b92-8bbb-7525b0e96550\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.189284 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-sb\") pod \"7ab25308-baab-4b92-8bbb-7525b0e96550\" (UID: \"7ab25308-baab-4b92-8bbb-7525b0e96550\") " Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.196007 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab25308-baab-4b92-8bbb-7525b0e96550-kube-api-access-872m4" (OuterVolumeSpecName: "kube-api-access-872m4") pod "7ab25308-baab-4b92-8bbb-7525b0e96550" (UID: "7ab25308-baab-4b92-8bbb-7525b0e96550"). InnerVolumeSpecName "kube-api-access-872m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.281681 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ab25308-baab-4b92-8bbb-7525b0e96550" (UID: "7ab25308-baab-4b92-8bbb-7525b0e96550"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.291860 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-872m4\" (UniqueName: \"kubernetes.io/projected/7ab25308-baab-4b92-8bbb-7525b0e96550-kube-api-access-872m4\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.291891 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.293076 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ab25308-baab-4b92-8bbb-7525b0e96550" (UID: "7ab25308-baab-4b92-8bbb-7525b0e96550"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.297148 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ab25308-baab-4b92-8bbb-7525b0e96550" (UID: "7ab25308-baab-4b92-8bbb-7525b0e96550"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.305850 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ab25308-baab-4b92-8bbb-7525b0e96550" (UID: "7ab25308-baab-4b92-8bbb-7525b0e96550"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.332146 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-config" (OuterVolumeSpecName: "config") pod "7ab25308-baab-4b92-8bbb-7525b0e96550" (UID: "7ab25308-baab-4b92-8bbb-7525b0e96550"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.393345 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.393379 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.393391 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.393399 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ab25308-baab-4b92-8bbb-7525b0e96550-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.738218 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7cb75676bc-dmjv6" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.926963 4678 generic.go:334] "Generic (PLEG): container finished" podID="fa325b77-8734-4325-a644-e4b421e45843" containerID="5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b" exitCode=0 Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 
11:38:13.927045 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa325b77-8734-4325-a644-e4b421e45843","Type":"ContainerDied","Data":"5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b"} Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.929215 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" event={"ID":"7ab25308-baab-4b92-8bbb-7525b0e96550","Type":"ContainerDied","Data":"b0e16f1eac9b87fe182e25da0289953778051dc913038ccdccfccb6ac3f01d45"} Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.929275 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mrl65" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.929285 4678 scope.go:117] "RemoveContainer" containerID="c57561c73afcc17e6a1ae4fd758b441a2ef9c94b15120b868b6bc4c2a6b7e409" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.958056 4678 scope.go:117] "RemoveContainer" containerID="247cd0024fb124946eca4e4e0b74c60ee385b0ac359bb90de64566a8cb7d3dff" Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.959072 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mrl65"] Nov 24 11:38:13 crc kubenswrapper[4678]: I1124 11:38:13.978084 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mrl65"] Nov 24 11:38:14 crc kubenswrapper[4678]: I1124 11:38:14.944728 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/2.log" Nov 24 11:38:14 crc kubenswrapper[4678]: I1124 11:38:14.945310 4678 generic.go:334] "Generic (PLEG): container finished" podID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerID="c67ff5f4392859d892fba844dbd76aea0671eb358cdb2961e81fad8ab5e1364e" exitCode=0 Nov 24 11:38:14 crc kubenswrapper[4678]: I1124 11:38:14.945337 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85857bf94-wpbc7" event={"ID":"b249aa27-98b1-40ce-85ab-5b7d0a8edf15","Type":"ContainerDied","Data":"c67ff5f4392859d892fba844dbd76aea0671eb358cdb2961e81fad8ab5e1364e"} Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.352871 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/2.log" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.358351 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.436684 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zxwc\" (UniqueName: \"kubernetes.io/projected/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-kube-api-access-4zxwc\") pod \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.436992 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-config\") pod \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.437058 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-ovndb-tls-certs\") pod \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.437191 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-httpd-config\") pod \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\" 
(UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.437374 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-combined-ca-bundle\") pod \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\" (UID: \"b249aa27-98b1-40ce-85ab-5b7d0a8edf15\") " Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.452392 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-kube-api-access-4zxwc" (OuterVolumeSpecName: "kube-api-access-4zxwc") pod "b249aa27-98b1-40ce-85ab-5b7d0a8edf15" (UID: "b249aa27-98b1-40ce-85ab-5b7d0a8edf15"). InnerVolumeSpecName "kube-api-access-4zxwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.458386 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b249aa27-98b1-40ce-85ab-5b7d0a8edf15" (UID: "b249aa27-98b1-40ce-85ab-5b7d0a8edf15"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.523567 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-config" (OuterVolumeSpecName: "config") pod "b249aa27-98b1-40ce-85ab-5b7d0a8edf15" (UID: "b249aa27-98b1-40ce-85ab-5b7d0a8edf15"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.535545 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b249aa27-98b1-40ce-85ab-5b7d0a8edf15" (UID: "b249aa27-98b1-40ce-85ab-5b7d0a8edf15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.539467 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.539528 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zxwc\" (UniqueName: \"kubernetes.io/projected/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-kube-api-access-4zxwc\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.539542 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.539552 4678 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.541723 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b249aa27-98b1-40ce-85ab-5b7d0a8edf15" (UID: "b249aa27-98b1-40ce-85ab-5b7d0a8edf15"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.641201 4678 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b249aa27-98b1-40ce-85ab-5b7d0a8edf15-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.914316 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab25308-baab-4b92-8bbb-7525b0e96550" path="/var/lib/kubelet/pods/7ab25308-baab-4b92-8bbb-7525b0e96550/volumes" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.960363 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85857bf94-wpbc7_b249aa27-98b1-40ce-85ab-5b7d0a8edf15/neutron-httpd/2.log" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.961065 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85857bf94-wpbc7" event={"ID":"b249aa27-98b1-40ce-85ab-5b7d0a8edf15","Type":"ContainerDied","Data":"dbee23641c5101139417af74bdd9e03ee19dd032b70cb424fa5b4bbcfd02a0d6"} Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.961111 4678 scope.go:117] "RemoveContainer" containerID="d567d5591efd267f5c56d60a89cae35438ee02e5dfef75b7482d21285b77ae32" Nov 24 11:38:15 crc kubenswrapper[4678]: I1124 11:38:15.961220 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85857bf94-wpbc7" Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:15.999792 4678 scope.go:117] "RemoveContainer" containerID="c67ff5f4392859d892fba844dbd76aea0671eb358cdb2961e81fad8ab5e1364e" Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.006684 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85857bf94-wpbc7"] Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.023380 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85857bf94-wpbc7"] Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.840507 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.976206 4678 generic.go:334] "Generic (PLEG): container finished" podID="fa325b77-8734-4325-a644-e4b421e45843" containerID="66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7" exitCode=0 Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.976287 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.976286 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa325b77-8734-4325-a644-e4b421e45843","Type":"ContainerDied","Data":"66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7"} Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.976447 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa325b77-8734-4325-a644-e4b421e45843","Type":"ContainerDied","Data":"ea1e488edfd336b0805dadb17bbfdbd98a4e7d723994f55218c42f60c940a6a7"} Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.976500 4678 scope.go:117] "RemoveContainer" containerID="5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b" Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.980258 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-scripts\") pod \"fa325b77-8734-4325-a644-e4b421e45843\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.980412 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data-custom\") pod \"fa325b77-8734-4325-a644-e4b421e45843\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.980522 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data\") pod \"fa325b77-8734-4325-a644-e4b421e45843\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.980621 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5jg5m\" (UniqueName: \"kubernetes.io/projected/fa325b77-8734-4325-a644-e4b421e45843-kube-api-access-5jg5m\") pod \"fa325b77-8734-4325-a644-e4b421e45843\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.980776 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa325b77-8734-4325-a644-e4b421e45843-etc-machine-id\") pod \"fa325b77-8734-4325-a644-e4b421e45843\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.980894 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-combined-ca-bundle\") pod \"fa325b77-8734-4325-a644-e4b421e45843\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.983772 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa325b77-8734-4325-a644-e4b421e45843-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fa325b77-8734-4325-a644-e4b421e45843" (UID: "fa325b77-8734-4325-a644-e4b421e45843"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.990507 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa325b77-8734-4325-a644-e4b421e45843" (UID: "fa325b77-8734-4325-a644-e4b421e45843"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.990551 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa325b77-8734-4325-a644-e4b421e45843-kube-api-access-5jg5m" (OuterVolumeSpecName: "kube-api-access-5jg5m") pod "fa325b77-8734-4325-a644-e4b421e45843" (UID: "fa325b77-8734-4325-a644-e4b421e45843"). InnerVolumeSpecName "kube-api-access-5jg5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:16 crc kubenswrapper[4678]: I1124 11:38:16.990754 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-scripts" (OuterVolumeSpecName: "scripts") pod "fa325b77-8734-4325-a644-e4b421e45843" (UID: "fa325b77-8734-4325-a644-e4b421e45843"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.012288 4678 scope.go:117] "RemoveContainer" containerID="66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.089938 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa325b77-8734-4325-a644-e4b421e45843" (UID: "fa325b77-8734-4325-a644-e4b421e45843"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.091079 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-combined-ca-bundle\") pod \"fa325b77-8734-4325-a644-e4b421e45843\" (UID: \"fa325b77-8734-4325-a644-e4b421e45843\") " Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.091704 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jg5m\" (UniqueName: \"kubernetes.io/projected/fa325b77-8734-4325-a644-e4b421e45843-kube-api-access-5jg5m\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.091727 4678 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa325b77-8734-4325-a644-e4b421e45843-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.091742 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.091756 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:17 crc kubenswrapper[4678]: W1124 11:38:17.091873 4678 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fa325b77-8734-4325-a644-e4b421e45843/volumes/kubernetes.io~secret/combined-ca-bundle Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.091888 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"fa325b77-8734-4325-a644-e4b421e45843" (UID: "fa325b77-8734-4325-a644-e4b421e45843"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.119832 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data" (OuterVolumeSpecName: "config-data") pod "fa325b77-8734-4325-a644-e4b421e45843" (UID: "fa325b77-8734-4325-a644-e4b421e45843"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.193881 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.194118 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa325b77-8734-4325-a644-e4b421e45843-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.230752 4678 scope.go:117] "RemoveContainer" containerID="5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.231436 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b\": container with ID starting with 5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b not found: ID does not exist" containerID="5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.231475 4678 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b"} err="failed to get container status \"5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b\": rpc error: code = NotFound desc = could not find container \"5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b\": container with ID starting with 5341be6bf663002642cc3e5f34199ba12ae5806d6a7799861c2c3d695e9c416b not found: ID does not exist" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.231500 4678 scope.go:117] "RemoveContainer" containerID="66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.231904 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7\": container with ID starting with 66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7 not found: ID does not exist" containerID="66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.231930 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7"} err="failed to get container status \"66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7\": rpc error: code = NotFound desc = could not find container \"66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7\": container with ID starting with 66c088820efbb57f1c35cd66e650ed595b0fe545a537501af0c0dd2dd700d7e7 not found: ID does not exist" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.314817 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.326084 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-scheduler-0"] Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.350532 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.351443 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab25308-baab-4b92-8bbb-7525b0e96550" containerName="init" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.351552 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab25308-baab-4b92-8bbb-7525b0e96550" containerName="init" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.351645 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa325b77-8734-4325-a644-e4b421e45843" containerName="cinder-scheduler" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.351898 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa325b77-8734-4325-a644-e4b421e45843" containerName="cinder-scheduler" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.352040 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.352144 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.352225 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa325b77-8734-4325-a644-e4b421e45843" containerName="probe" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.352312 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa325b77-8734-4325-a644-e4b421e45843" containerName="probe" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.352452 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab25308-baab-4b92-8bbb-7525b0e96550" containerName="dnsmasq-dns" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.352550 
4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab25308-baab-4b92-8bbb-7525b0e96550" containerName="dnsmasq-dns" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.352632 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.352736 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.352818 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-api" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.352889 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-api" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.353280 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-api" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.353395 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.353500 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab25308-baab-4b92-8bbb-7525b0e96550" containerName="dnsmasq-dns" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.353586 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa325b77-8734-4325-a644-e4b421e45843" containerName="cinder-scheduler" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.353739 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa325b77-8734-4325-a644-e4b421e45843" containerName="probe" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.353824 4678 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.353919 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: E1124 11:38:17.354344 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.354477 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" containerName="neutron-httpd" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.355847 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.358259 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.365431 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.500166 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ebf38af-2df6-49a3-8a00-37ff5996c82e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.500261 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc 
kubenswrapper[4678]: I1124 11:38:17.500306 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.500362 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.500410 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2tg8\" (UniqueName: \"kubernetes.io/projected/1ebf38af-2df6-49a3-8a00-37ff5996c82e-kube-api-access-z2tg8\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.500435 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.588937 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.590634 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.593123 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-g8lgz" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.593181 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.593136 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.598611 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.604202 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ebf38af-2df6-49a3-8a00-37ff5996c82e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.604289 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.604336 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.604399 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.604441 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2tg8\" (UniqueName: \"kubernetes.io/projected/1ebf38af-2df6-49a3-8a00-37ff5996c82e-kube-api-access-z2tg8\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.604466 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.604708 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ebf38af-2df6-49a3-8a00-37ff5996c82e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.615569 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0" Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.616268 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " 
pod="openstack/cinder-scheduler-0"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.619389 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.621228 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ebf38af-2df6-49a3-8a00-37ff5996c82e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.633810 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2tg8\" (UniqueName: \"kubernetes.io/projected/1ebf38af-2df6-49a3-8a00-37ff5996c82e-kube-api-access-z2tg8\") pod \"cinder-scheduler-0\" (UID: \"1ebf38af-2df6-49a3-8a00-37ff5996c82e\") " pod="openstack/cinder-scheduler-0"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.705914 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.705965 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj758\" (UniqueName: \"kubernetes.io/projected/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-kube-api-access-cj758\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.706078 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-openstack-config\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.706141 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.712125 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.808921 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-openstack-config\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.809019 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.809065 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.809088 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj758\" (UniqueName: \"kubernetes.io/projected/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-kube-api-access-cj758\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.810362 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-openstack-config\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.812930 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.817593 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.826102 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj758\" (UniqueName: \"kubernetes.io/projected/b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7-kube-api-access-cj758\") pod \"openstackclient\" (UID: \"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7\") " pod="openstack/openstackclient"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.928773 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b249aa27-98b1-40ce-85ab-5b7d0a8edf15" path="/var/lib/kubelet/pods/b249aa27-98b1-40ce-85ab-5b7d0a8edf15/volumes"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.930168 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa325b77-8734-4325-a644-e4b421e45843" path="/var/lib/kubelet/pods/fa325b77-8734-4325-a644-e4b421e45843/volumes"
Nov 24 11:38:17 crc kubenswrapper[4678]: I1124 11:38:17.972225 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 11:38:18 crc kubenswrapper[4678]: I1124 11:38:18.222026 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 11:38:18 crc kubenswrapper[4678]: I1124 11:38:18.763034 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 24 11:38:19 crc kubenswrapper[4678]: I1124 11:38:19.032804 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ebf38af-2df6-49a3-8a00-37ff5996c82e","Type":"ContainerStarted","Data":"06a00b726e55af6e8dc9a1099fac90c06fc2a36f2f1deaf1fa80e9a28bde004d"}
Nov 24 11:38:19 crc kubenswrapper[4678]: I1124 11:38:19.041869 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7","Type":"ContainerStarted","Data":"ad45c00cc5ddc0d660724ce94caf952c3c43d2f51a5bb717809691386ad3e394"}
Nov 24 11:38:20 crc kubenswrapper[4678]: I1124 11:38:20.061343 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ebf38af-2df6-49a3-8a00-37ff5996c82e","Type":"ContainerStarted","Data":"f638dc37c4cb4cefdd80ecae4a6cbb462bd4a59b6ae85131490fe2135b02dfb5"}
Nov 24 11:38:20 crc kubenswrapper[4678]: I1124 11:38:20.062085 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ebf38af-2df6-49a3-8a00-37ff5996c82e","Type":"ContainerStarted","Data":"c6bf345cc759a9844bbd586e0cfe3b6ce7c0639698776443b9c217c1fbe66de0"}
Nov 24 11:38:20 crc kubenswrapper[4678]: I1124 11:38:20.094513 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.09448901 podStartE2EDuration="3.09448901s" podCreationTimestamp="2025-11-24 11:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:20.082414228 +0000 UTC m=+1311.013473867" watchObservedRunningTime="2025-11-24 11:38:20.09448901 +0000 UTC m=+1311.025548649"
Nov 24 11:38:20 crc kubenswrapper[4678]: I1124 11:38:20.842994 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.358462 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-74f7b98495-b5gj8"]
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.362207 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.364358 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.364578 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.364683 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.415081 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-74f7b98495-b5gj8"]
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.467871 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-run-httpd\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.468055 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-combined-ca-bundle\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.468109 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-public-tls-certs\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.468174 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-config-data\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.468201 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-log-httpd\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.468224 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-internal-tls-certs\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.468449 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgl9l\" (UniqueName: \"kubernetes.io/projected/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-kube-api-access-hgl9l\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.468540 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-etc-swift\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.570578 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-combined-ca-bundle\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.570658 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-public-tls-certs\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.570743 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-config-data\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.570766 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-log-httpd\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.570780 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-internal-tls-certs\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.570810 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgl9l\" (UniqueName: \"kubernetes.io/projected/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-kube-api-access-hgl9l\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.570829 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-etc-swift\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.570874 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-run-httpd\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.571410 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-log-httpd\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.572104 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-run-httpd\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.578721 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-internal-tls-certs\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.579974 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-combined-ca-bundle\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.582309 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-public-tls-certs\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.582863 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-config-data\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.585961 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-etc-swift\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.591127 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgl9l\" (UniqueName: \"kubernetes.io/projected/95ada9de-2ac2-4ea9-9d4d-0ef4293da59f-kube-api-access-hgl9l\") pod \"swift-proxy-74f7b98495-b5gj8\" (UID: \"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f\") " pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:21 crc kubenswrapper[4678]: I1124 11:38:21.686874 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:22 crc kubenswrapper[4678]: I1124 11:38:22.304463 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-74f7b98495-b5gj8"]
Nov 24 11:38:22 crc kubenswrapper[4678]: I1124 11:38:22.712850 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.131384 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74f7b98495-b5gj8" event={"ID":"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f","Type":"ContainerStarted","Data":"c10df1c4d3d43ac60e058ba2340793ec0334ff70522d0ff5c2b173cc5676db40"}
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.132217 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.132239 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74f7b98495-b5gj8" event={"ID":"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f","Type":"ContainerStarted","Data":"79f8d961917817948e8e2f002306b9d0299fb7432ce2888f3ed5c999ad34f7ee"}
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.132252 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74f7b98495-b5gj8" event={"ID":"95ada9de-2ac2-4ea9-9d4d-0ef4293da59f","Type":"ContainerStarted","Data":"7cc009e1645aa8d9fceadf5d56edac2ba25762747ea7ac2701ee31061a483537"}
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.159353 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-74f7b98495-b5gj8" podStartSLOduration=2.159332012 podStartE2EDuration="2.159332012s" podCreationTimestamp="2025-11-24 11:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:23.149177781 +0000 UTC m=+1314.080237490" watchObservedRunningTime="2025-11-24 11:38:23.159332012 +0000 UTC m=+1314.090391651"
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.282934 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.283209 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="ceilometer-central-agent" containerID="cri-o://a3765922381eee06bd9373b751beef7e24af803256a4fa9caa96454b76fbcce7" gracePeriod=30
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.283660 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="proxy-httpd" containerID="cri-o://d16f5d67e70693f2d5bfe1da45ff7aa26c083105a581d3c7adf327f008b22548" gracePeriod=30
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.283737 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="sg-core" containerID="cri-o://e2b5bfe3d63d2ddb3c6a30ab3e324e5ac3211f9c16ce7130b9842c31e7e870ac" gracePeriod=30
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.283774 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="ceilometer-notification-agent" containerID="cri-o://1669000afb61c176dfad96e3665f4482d795f885089614888d8f7c4b8b4f9ec5" gracePeriod=30
Nov 24 11:38:23 crc kubenswrapper[4678]: I1124 11:38:23.303624 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": EOF"
Nov 24 11:38:24 crc kubenswrapper[4678]: I1124 11:38:24.180571 4678 generic.go:334] "Generic (PLEG): container finished" podID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerID="d16f5d67e70693f2d5bfe1da45ff7aa26c083105a581d3c7adf327f008b22548" exitCode=0
Nov 24 11:38:24 crc kubenswrapper[4678]: I1124 11:38:24.180923 4678 generic.go:334] "Generic (PLEG): container finished" podID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerID="e2b5bfe3d63d2ddb3c6a30ab3e324e5ac3211f9c16ce7130b9842c31e7e870ac" exitCode=2
Nov 24 11:38:24 crc kubenswrapper[4678]: I1124 11:38:24.180935 4678 generic.go:334] "Generic (PLEG): container finished" podID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerID="a3765922381eee06bd9373b751beef7e24af803256a4fa9caa96454b76fbcce7" exitCode=0
Nov 24 11:38:24 crc kubenswrapper[4678]: I1124 11:38:24.181499 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerDied","Data":"d16f5d67e70693f2d5bfe1da45ff7aa26c083105a581d3c7adf327f008b22548"}
Nov 24 11:38:24 crc kubenswrapper[4678]: I1124 11:38:24.181571 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerDied","Data":"e2b5bfe3d63d2ddb3c6a30ab3e324e5ac3211f9c16ce7130b9842c31e7e870ac"}
Nov 24 11:38:24 crc kubenswrapper[4678]: I1124 11:38:24.181584 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerDied","Data":"a3765922381eee06bd9373b751beef7e24af803256a4fa9caa96454b76fbcce7"}
Nov 24 11:38:24 crc kubenswrapper[4678]: I1124 11:38:24.181627 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-74f7b98495-b5gj8"
Nov 24 11:38:25 crc kubenswrapper[4678]: I1124 11:38:25.925497 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": dial tcp 10.217.0.199:3000: connect: connection refused"
Nov 24 11:38:26 crc kubenswrapper[4678]: I1124 11:38:26.214040 4678 generic.go:334] "Generic (PLEG): container finished" podID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerID="1669000afb61c176dfad96e3665f4482d795f885089614888d8f7c4b8b4f9ec5" exitCode=0
Nov 24 11:38:26 crc kubenswrapper[4678]: I1124 11:38:26.214090 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerDied","Data":"1669000afb61c176dfad96e3665f4482d795f885089614888d8f7c4b8b4f9ec5"}
Nov 24 11:38:27 crc kubenswrapper[4678]: I1124 11:38:27.978743 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.339530 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fq7ll"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.342544 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fq7ll"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.361192 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fq7ll"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.447164 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vfl9l"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.449011 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vfl9l"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.459034 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vfl9l"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.492878 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-244b-account-create-6jsxj"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.494266 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-244b-account-create-6jsxj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.497794 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.514845 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-244b-account-create-6jsxj"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.526410 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc44c93e-8f06-48eb-a0a6-36a04e942702-operator-scripts\") pod \"nova-api-db-create-fq7ll\" (UID: \"fc44c93e-8f06-48eb-a0a6-36a04e942702\") " pod="openstack/nova-api-db-create-fq7ll"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.526866 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hss74\" (UniqueName: \"kubernetes.io/projected/fc44c93e-8f06-48eb-a0a6-36a04e942702-kube-api-access-hss74\") pod \"nova-api-db-create-fq7ll\" (UID: \"fc44c93e-8f06-48eb-a0a6-36a04e942702\") " pod="openstack/nova-api-db-create-fq7ll"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.628504 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hss74\" (UniqueName: \"kubernetes.io/projected/fc44c93e-8f06-48eb-a0a6-36a04e942702-kube-api-access-hss74\") pod \"nova-api-db-create-fq7ll\" (UID: \"fc44c93e-8f06-48eb-a0a6-36a04e942702\") " pod="openstack/nova-api-db-create-fq7ll"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.628564 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qxbx\" (UniqueName: \"kubernetes.io/projected/5223630b-272a-434b-83df-ef3915f58880-kube-api-access-8qxbx\") pod \"nova-api-244b-account-create-6jsxj\" (UID: \"5223630b-272a-434b-83df-ef3915f58880\") " pod="openstack/nova-api-244b-account-create-6jsxj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.628665 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-operator-scripts\") pod \"nova-cell0-db-create-vfl9l\" (UID: \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\") " pod="openstack/nova-cell0-db-create-vfl9l"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.628833 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc44c93e-8f06-48eb-a0a6-36a04e942702-operator-scripts\") pod \"nova-api-db-create-fq7ll\" (UID: \"fc44c93e-8f06-48eb-a0a6-36a04e942702\") " pod="openstack/nova-api-db-create-fq7ll"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.629009 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5223630b-272a-434b-83df-ef3915f58880-operator-scripts\") pod \"nova-api-244b-account-create-6jsxj\" (UID: \"5223630b-272a-434b-83df-ef3915f58880\") " pod="openstack/nova-api-244b-account-create-6jsxj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.629128 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgjd\" (UniqueName: \"kubernetes.io/projected/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-kube-api-access-zhgjd\") pod \"nova-cell0-db-create-vfl9l\" (UID: \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\") " pod="openstack/nova-cell0-db-create-vfl9l"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.630519 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc44c93e-8f06-48eb-a0a6-36a04e942702-operator-scripts\") pod \"nova-api-db-create-fq7ll\" (UID: \"fc44c93e-8f06-48eb-a0a6-36a04e942702\") " pod="openstack/nova-api-db-create-fq7ll"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.646295 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rn787"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.648008 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rn787"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.665047 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hss74\" (UniqueName: \"kubernetes.io/projected/fc44c93e-8f06-48eb-a0a6-36a04e942702-kube-api-access-hss74\") pod \"nova-api-db-create-fq7ll\" (UID: \"fc44c93e-8f06-48eb-a0a6-36a04e942702\") " pod="openstack/nova-api-db-create-fq7ll"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.666358 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rn787"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.680980 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fq7ll"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.687919 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-02a8-account-create-vkqwp"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.689796 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-02a8-account-create-vkqwp"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.692422 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.740369 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-02a8-account-create-vkqwp"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.741517 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5223630b-272a-434b-83df-ef3915f58880-operator-scripts\") pod \"nova-api-244b-account-create-6jsxj\" (UID: \"5223630b-272a-434b-83df-ef3915f58880\") " pod="openstack/nova-api-244b-account-create-6jsxj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.741751 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhgjd\" (UniqueName: \"kubernetes.io/projected/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-kube-api-access-zhgjd\") pod \"nova-cell0-db-create-vfl9l\" (UID: \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\") " pod="openstack/nova-cell0-db-create-vfl9l"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.741800 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qxbx\" (UniqueName: \"kubernetes.io/projected/5223630b-272a-434b-83df-ef3915f58880-kube-api-access-8qxbx\") pod \"nova-api-244b-account-create-6jsxj\" (UID: \"5223630b-272a-434b-83df-ef3915f58880\") " pod="openstack/nova-api-244b-account-create-6jsxj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.742050 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-operator-scripts\") pod \"nova-cell0-db-create-vfl9l\" (UID: \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\") " pod="openstack/nova-cell0-db-create-vfl9l"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.743053 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5223630b-272a-434b-83df-ef3915f58880-operator-scripts\") pod \"nova-api-244b-account-create-6jsxj\" (UID: \"5223630b-272a-434b-83df-ef3915f58880\") " pod="openstack/nova-api-244b-account-create-6jsxj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.746100 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-operator-scripts\") pod \"nova-cell0-db-create-vfl9l\" (UID: \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\") " pod="openstack/nova-cell0-db-create-vfl9l"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.763095 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhgjd\" (UniqueName: \"kubernetes.io/projected/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-kube-api-access-zhgjd\") pod \"nova-cell0-db-create-vfl9l\" (UID: \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\") " pod="openstack/nova-cell0-db-create-vfl9l"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.767280 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qxbx\" (UniqueName: \"kubernetes.io/projected/5223630b-272a-434b-83df-ef3915f58880-kube-api-access-8qxbx\") pod \"nova-api-244b-account-create-6jsxj\" (UID: \"5223630b-272a-434b-83df-ef3915f58880\") " pod="openstack/nova-api-244b-account-create-6jsxj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.784179 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vfl9l"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.816751 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-244b-account-create-6jsxj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.848530 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923a45a5-bc05-4472-b647-b280bec7618b-operator-scripts\") pod \"nova-cell1-db-create-rn787\" (UID: \"923a45a5-bc05-4472-b647-b280bec7618b\") " pod="openstack/nova-cell1-db-create-rn787"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.848604 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfc56\" (UniqueName: \"kubernetes.io/projected/4f2aa84a-6c99-44d4-b3e4-11756080a16a-kube-api-access-vfc56\") pod \"nova-cell0-02a8-account-create-vkqwp\" (UID: \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\") " pod="openstack/nova-cell0-02a8-account-create-vkqwp"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.849813 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmxz\" (UniqueName: \"kubernetes.io/projected/923a45a5-bc05-4472-b647-b280bec7618b-kube-api-access-6tmxz\") pod \"nova-cell1-db-create-rn787\" (UID: \"923a45a5-bc05-4472-b647-b280bec7618b\") " pod="openstack/nova-cell1-db-create-rn787"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.849928 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2aa84a-6c99-44d4-b3e4-11756080a16a-operator-scripts\") pod \"nova-cell0-02a8-account-create-vkqwp\" (UID: \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\") " pod="openstack/nova-cell0-02a8-account-create-vkqwp"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.857465 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-abe0-account-create-7rbmj"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.859140 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-abe0-account-create-7rbmj"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.861895 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.870390 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-abe0-account-create-7rbmj"]
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.952311 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmxz\" (UniqueName: \"kubernetes.io/projected/923a45a5-bc05-4472-b647-b280bec7618b-kube-api-access-6tmxz\") pod \"nova-cell1-db-create-rn787\" (UID: \"923a45a5-bc05-4472-b647-b280bec7618b\") " pod="openstack/nova-cell1-db-create-rn787"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.952609 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2aa84a-6c99-44d4-b3e4-11756080a16a-operator-scripts\") pod \"nova-cell0-02a8-account-create-vkqwp\" (UID: \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\") " pod="openstack/nova-cell0-02a8-account-create-vkqwp"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.952880 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923a45a5-bc05-4472-b647-b280bec7618b-operator-scripts\") pod \"nova-cell1-db-create-rn787\" (UID: \"923a45a5-bc05-4472-b647-b280bec7618b\") " pod="openstack/nova-cell1-db-create-rn787"
Nov 24 11:38:30 crc kubenswrapper[4678]: I1124
11:38:30.953027 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfc56\" (UniqueName: \"kubernetes.io/projected/4f2aa84a-6c99-44d4-b3e4-11756080a16a-kube-api-access-vfc56\") pod \"nova-cell0-02a8-account-create-vkqwp\" (UID: \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\") " pod="openstack/nova-cell0-02a8-account-create-vkqwp" Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.957056 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2aa84a-6c99-44d4-b3e4-11756080a16a-operator-scripts\") pod \"nova-cell0-02a8-account-create-vkqwp\" (UID: \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\") " pod="openstack/nova-cell0-02a8-account-create-vkqwp" Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.957350 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923a45a5-bc05-4472-b647-b280bec7618b-operator-scripts\") pod \"nova-cell1-db-create-rn787\" (UID: \"923a45a5-bc05-4472-b647-b280bec7618b\") " pod="openstack/nova-cell1-db-create-rn787" Nov 24 11:38:30 crc kubenswrapper[4678]: I1124 11:38:30.997846 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfc56\" (UniqueName: \"kubernetes.io/projected/4f2aa84a-6c99-44d4-b3e4-11756080a16a-kube-api-access-vfc56\") pod \"nova-cell0-02a8-account-create-vkqwp\" (UID: \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\") " pod="openstack/nova-cell0-02a8-account-create-vkqwp" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.000369 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmxz\" (UniqueName: \"kubernetes.io/projected/923a45a5-bc05-4472-b647-b280bec7618b-kube-api-access-6tmxz\") pod \"nova-cell1-db-create-rn787\" (UID: \"923a45a5-bc05-4472-b647-b280bec7618b\") " pod="openstack/nova-cell1-db-create-rn787" Nov 24 11:38:31 crc kubenswrapper[4678]: 
I1124 11:38:31.030237 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rn787" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.058121 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7fs\" (UniqueName: \"kubernetes.io/projected/e34ca05d-7673-435b-a6e6-0d775765472c-kube-api-access-4d7fs\") pod \"nova-cell1-abe0-account-create-7rbmj\" (UID: \"e34ca05d-7673-435b-a6e6-0d775765472c\") " pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.058229 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e34ca05d-7673-435b-a6e6-0d775765472c-operator-scripts\") pod \"nova-cell1-abe0-account-create-7rbmj\" (UID: \"e34ca05d-7673-435b-a6e6-0d775765472c\") " pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.063014 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-02a8-account-create-vkqwp" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.161234 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7fs\" (UniqueName: \"kubernetes.io/projected/e34ca05d-7673-435b-a6e6-0d775765472c-kube-api-access-4d7fs\") pod \"nova-cell1-abe0-account-create-7rbmj\" (UID: \"e34ca05d-7673-435b-a6e6-0d775765472c\") " pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.161542 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e34ca05d-7673-435b-a6e6-0d775765472c-operator-scripts\") pod \"nova-cell1-abe0-account-create-7rbmj\" (UID: \"e34ca05d-7673-435b-a6e6-0d775765472c\") " pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.162350 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e34ca05d-7673-435b-a6e6-0d775765472c-operator-scripts\") pod \"nova-cell1-abe0-account-create-7rbmj\" (UID: \"e34ca05d-7673-435b-a6e6-0d775765472c\") " pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.186815 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7fs\" (UniqueName: \"kubernetes.io/projected/e34ca05d-7673-435b-a6e6-0d775765472c-kube-api-access-4d7fs\") pod \"nova-cell1-abe0-account-create-7rbmj\" (UID: \"e34ca05d-7673-435b-a6e6-0d775765472c\") " pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.231088 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.424226 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.582226 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-scripts\") pod \"257bbe91-8baa-435d-9caf-a4945285bfe7\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.582603 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-run-httpd\") pod \"257bbe91-8baa-435d-9caf-a4945285bfe7\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.582801 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-sg-core-conf-yaml\") pod \"257bbe91-8baa-435d-9caf-a4945285bfe7\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.582855 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-config-data\") pod \"257bbe91-8baa-435d-9caf-a4945285bfe7\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.582898 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j44vw\" (UniqueName: \"kubernetes.io/projected/257bbe91-8baa-435d-9caf-a4945285bfe7-kube-api-access-j44vw\") pod \"257bbe91-8baa-435d-9caf-a4945285bfe7\" (UID: 
\"257bbe91-8baa-435d-9caf-a4945285bfe7\") " Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.582958 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-combined-ca-bundle\") pod \"257bbe91-8baa-435d-9caf-a4945285bfe7\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.583089 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-log-httpd\") pod \"257bbe91-8baa-435d-9caf-a4945285bfe7\" (UID: \"257bbe91-8baa-435d-9caf-a4945285bfe7\") " Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.586905 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "257bbe91-8baa-435d-9caf-a4945285bfe7" (UID: "257bbe91-8baa-435d-9caf-a4945285bfe7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.589024 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "257bbe91-8baa-435d-9caf-a4945285bfe7" (UID: "257bbe91-8baa-435d-9caf-a4945285bfe7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.606804 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257bbe91-8baa-435d-9caf-a4945285bfe7-kube-api-access-j44vw" (OuterVolumeSpecName: "kube-api-access-j44vw") pod "257bbe91-8baa-435d-9caf-a4945285bfe7" (UID: "257bbe91-8baa-435d-9caf-a4945285bfe7"). 
InnerVolumeSpecName "kube-api-access-j44vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.612788 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-scripts" (OuterVolumeSpecName: "scripts") pod "257bbe91-8baa-435d-9caf-a4945285bfe7" (UID: "257bbe91-8baa-435d-9caf-a4945285bfe7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.687819 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "257bbe91-8baa-435d-9caf-a4945285bfe7" (UID: "257bbe91-8baa-435d-9caf-a4945285bfe7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.693859 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.706769 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j44vw\" (UniqueName: \"kubernetes.io/projected/257bbe91-8baa-435d-9caf-a4945285bfe7-kube-api-access-j44vw\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.706786 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.706799 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-scripts\") on node 
\"crc\" DevicePath \"\"" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.706809 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/257bbe91-8baa-435d-9caf-a4945285bfe7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.716397 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-74f7b98495-b5gj8" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.722163 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-74f7b98495-b5gj8" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.743024 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "257bbe91-8baa-435d-9caf-a4945285bfe7" (UID: "257bbe91-8baa-435d-9caf-a4945285bfe7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.809430 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.851896 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-config-data" (OuterVolumeSpecName: "config-data") pod "257bbe91-8baa-435d-9caf-a4945285bfe7" (UID: "257bbe91-8baa-435d-9caf-a4945285bfe7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.854288 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-244b-account-create-6jsxj"] Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.878730 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fq7ll"] Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.910589 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257bbe91-8baa-435d-9caf-a4945285bfe7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.931128 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vfl9l"] Nov 24 11:38:31 crc kubenswrapper[4678]: I1124 11:38:31.931165 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rn787"] Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.009227 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-02a8-account-create-vkqwp"] Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.033729 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-abe0-account-create-7rbmj"] Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.340025 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7","Type":"ContainerStarted","Data":"dbc316e1d0eee203bf803ee9cb87e082581a4347d7bb2dfdc460f93172f021b7"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.356784 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfl9l" event={"ID":"6c34bab2-8d47-43e1-b367-8dd9b5c13c47","Type":"ContainerStarted","Data":"d94c3a26a2b1f0f0e2cf372040f1bd2e2eeba39a52880e2f3a5f33fc2e9656c9"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.356828 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfl9l" event={"ID":"6c34bab2-8d47-43e1-b367-8dd9b5c13c47","Type":"ContainerStarted","Data":"f442bc2ae24f12fc9124fb3677778d2c82b63cf1e5276079d9db2dd9096d1330"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.380147 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"257bbe91-8baa-435d-9caf-a4945285bfe7","Type":"ContainerDied","Data":"ba37ab0e06262641c8526426dfbba121e3117ac2e052d30c28588888e65eb7eb"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.380213 4678 scope.go:117] "RemoveContainer" containerID="d16f5d67e70693f2d5bfe1da45ff7aa26c083105a581d3c7adf327f008b22548" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.380392 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.381140 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.107735594 podStartE2EDuration="15.381116116s" podCreationTimestamp="2025-11-24 11:38:17 +0000 UTC" firstStartedPulling="2025-11-24 11:38:18.7892133 +0000 UTC m=+1309.720272939" lastFinishedPulling="2025-11-24 11:38:31.062593822 +0000 UTC m=+1321.993653461" observedRunningTime="2025-11-24 11:38:32.361864332 +0000 UTC m=+1323.292923971" watchObservedRunningTime="2025-11-24 11:38:32.381116116 +0000 UTC m=+1323.312175755" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.391318 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-02a8-account-create-vkqwp" event={"ID":"4f2aa84a-6c99-44d4-b3e4-11756080a16a","Type":"ContainerStarted","Data":"600dc815118417d7065c40a23da0a2ca88c4fa01e72e4595e26f3f01ddda7da0"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.391347 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-db-create-vfl9l" podStartSLOduration=2.39132918 podStartE2EDuration="2.39132918s" podCreationTimestamp="2025-11-24 11:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:32.377914501 +0000 UTC m=+1323.308974140" watchObservedRunningTime="2025-11-24 11:38:32.39132918 +0000 UTC m=+1323.322388819" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.402416 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-abe0-account-create-7rbmj" event={"ID":"e34ca05d-7673-435b-a6e6-0d775765472c","Type":"ContainerStarted","Data":"0a8deddf41465dc2d736d3df77a92f4ea6b9171310fa468b94f816dae4a57bc3"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.424212 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rn787" event={"ID":"923a45a5-bc05-4472-b647-b280bec7618b","Type":"ContainerStarted","Data":"83eaa8ff307be17cae8e75b813fa048d62d6b6ef64c59edcaf87cd99fb15ca17"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.429298 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-02a8-account-create-vkqwp" podStartSLOduration=2.429279764 podStartE2EDuration="2.429279764s" podCreationTimestamp="2025-11-24 11:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:32.406395022 +0000 UTC m=+1323.337454671" watchObservedRunningTime="2025-11-24 11:38:32.429279764 +0000 UTC m=+1323.360339403" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.454998 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.458796 4678 scope.go:117] "RemoveContainer" containerID="e2b5bfe3d63d2ddb3c6a30ab3e324e5ac3211f9c16ce7130b9842c31e7e870ac" Nov 24 11:38:32 crc 
kubenswrapper[4678]: I1124 11:38:32.459000 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-244b-account-create-6jsxj" event={"ID":"5223630b-272a-434b-83df-ef3915f58880","Type":"ContainerStarted","Data":"c72e46814877bd7631836b7f6611cfdf6281ff5512e74b1c16a5d1f956ac0f00"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.459043 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-244b-account-create-6jsxj" event={"ID":"5223630b-272a-434b-83df-ef3915f58880","Type":"ContainerStarted","Data":"6fbaec892588058364384f88886efdf96297a3156a6ea6d9716a7d21ac2e4977"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.465914 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.473539 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-abe0-account-create-7rbmj" podStartSLOduration=2.473520495 podStartE2EDuration="2.473520495s" podCreationTimestamp="2025-11-24 11:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:32.436599069 +0000 UTC m=+1323.367658708" watchObservedRunningTime="2025-11-24 11:38:32.473520495 +0000 UTC m=+1323.404580134" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.475625 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fq7ll" event={"ID":"fc44c93e-8f06-48eb-a0a6-36a04e942702","Type":"ContainerStarted","Data":"85ac138c0a01934354e4f66fab67b49c448925c029f41568d4c013ca444f1398"} Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.475685 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fq7ll" event={"ID":"fc44c93e-8f06-48eb-a0a6-36a04e942702","Type":"ContainerStarted","Data":"895056274e8f2964f2801c1f025e41191098e945669a5202c296d4248f1d3e38"} Nov 24 11:38:32 crc 
kubenswrapper[4678]: I1124 11:38:32.485980 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:32 crc kubenswrapper[4678]: E1124 11:38:32.487630 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="ceilometer-central-agent" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.487643 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="ceilometer-central-agent" Nov 24 11:38:32 crc kubenswrapper[4678]: E1124 11:38:32.487688 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="ceilometer-notification-agent" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.487696 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="ceilometer-notification-agent" Nov 24 11:38:32 crc kubenswrapper[4678]: E1124 11:38:32.487728 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="proxy-httpd" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.487735 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="proxy-httpd" Nov 24 11:38:32 crc kubenswrapper[4678]: E1124 11:38:32.487748 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="sg-core" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.489295 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="sg-core" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.489553 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="ceilometer-notification-agent" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 
11:38:32.489585 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="sg-core" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.489596 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="ceilometer-central-agent" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.489613 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" containerName="proxy-httpd" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.492550 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-rn787" podStartSLOduration=2.492531614 podStartE2EDuration="2.492531614s" podCreationTimestamp="2025-11-24 11:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:32.461613168 +0000 UTC m=+1323.392672817" watchObservedRunningTime="2025-11-24 11:38:32.492531614 +0000 UTC m=+1323.423591253" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.495692 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.495859 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.498376 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.504917 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.514644 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-244b-account-create-6jsxj" podStartSLOduration=2.5146269439999998 podStartE2EDuration="2.514626944s" podCreationTimestamp="2025-11-24 11:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:32.485619289 +0000 UTC m=+1323.416678938" watchObservedRunningTime="2025-11-24 11:38:32.514626944 +0000 UTC m=+1323.445686583" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.531857 4678 scope.go:117] "RemoveContainer" containerID="1669000afb61c176dfad96e3665f4482d795f885089614888d8f7c4b8b4f9ec5" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.533269 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk8nk\" (UniqueName: \"kubernetes.io/projected/a0428bc2-7f90-4d19-86d4-ce0a69513a88-kube-api-access-hk8nk\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.533378 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-log-httpd\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.533428 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.533471 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-config-data\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.533561 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.533600 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-run-httpd\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.533622 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-scripts\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.536760 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-fq7ll" podStartSLOduration=2.536740036 
podStartE2EDuration="2.536740036s" podCreationTimestamp="2025-11-24 11:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:32.504781541 +0000 UTC m=+1323.435841190" watchObservedRunningTime="2025-11-24 11:38:32.536740036 +0000 UTC m=+1323.467799675" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.563263 4678 scope.go:117] "RemoveContainer" containerID="a3765922381eee06bd9373b751beef7e24af803256a4fa9caa96454b76fbcce7" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.638072 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.638146 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-config-data\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.638218 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.638251 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-run-httpd\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 
11:38:32.638269 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-scripts\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.638333 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk8nk\" (UniqueName: \"kubernetes.io/projected/a0428bc2-7f90-4d19-86d4-ce0a69513a88-kube-api-access-hk8nk\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.638399 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-log-httpd\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.639516 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-log-httpd\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.639923 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-run-httpd\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.649564 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.649649 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.652450 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-scripts\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.660878 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-config-data\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.663339 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk8nk\" (UniqueName: \"kubernetes.io/projected/a0428bc2-7f90-4d19-86d4-ce0a69513a88-kube-api-access-hk8nk\") pod \"ceilometer-0\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " pod="openstack/ceilometer-0" Nov 24 11:38:32 crc kubenswrapper[4678]: I1124 11:38:32.833026 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:38:33 crc kubenswrapper[4678]: W1124 11:38:33.367820 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0428bc2_7f90_4d19_86d4_ce0a69513a88.slice/crio-153c71a14becd316428da11cbcba3b075c365e25dbf2af526ad022ad285b35f2 WatchSource:0}: Error finding container 153c71a14becd316428da11cbcba3b075c365e25dbf2af526ad022ad285b35f2: Status 404 returned error can't find the container with id 153c71a14becd316428da11cbcba3b075c365e25dbf2af526ad022ad285b35f2 Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.371743 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.486843 4678 generic.go:334] "Generic (PLEG): container finished" podID="fc44c93e-8f06-48eb-a0a6-36a04e942702" containerID="85ac138c0a01934354e4f66fab67b49c448925c029f41568d4c013ca444f1398" exitCode=0 Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.486918 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fq7ll" event={"ID":"fc44c93e-8f06-48eb-a0a6-36a04e942702","Type":"ContainerDied","Data":"85ac138c0a01934354e4f66fab67b49c448925c029f41568d4c013ca444f1398"} Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.489030 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerStarted","Data":"153c71a14becd316428da11cbcba3b075c365e25dbf2af526ad022ad285b35f2"} Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.490282 4678 generic.go:334] "Generic (PLEG): container finished" podID="6c34bab2-8d47-43e1-b367-8dd9b5c13c47" containerID="d94c3a26a2b1f0f0e2cf372040f1bd2e2eeba39a52880e2f3a5f33fc2e9656c9" exitCode=0 Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.490351 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-vfl9l" event={"ID":"6c34bab2-8d47-43e1-b367-8dd9b5c13c47","Type":"ContainerDied","Data":"d94c3a26a2b1f0f0e2cf372040f1bd2e2eeba39a52880e2f3a5f33fc2e9656c9"} Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.492575 4678 generic.go:334] "Generic (PLEG): container finished" podID="4f2aa84a-6c99-44d4-b3e4-11756080a16a" containerID="889857007c31d2f59ceb7da9ed01ac8cc91dbe1611cb1a00c4f1a4bf347c07bc" exitCode=0 Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.492639 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-02a8-account-create-vkqwp" event={"ID":"4f2aa84a-6c99-44d4-b3e4-11756080a16a","Type":"ContainerDied","Data":"889857007c31d2f59ceb7da9ed01ac8cc91dbe1611cb1a00c4f1a4bf347c07bc"} Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.493958 4678 generic.go:334] "Generic (PLEG): container finished" podID="e34ca05d-7673-435b-a6e6-0d775765472c" containerID="d16f80ed63ce8416a7c4129769046e29ceeb8fa909d6601c2e275d63a7ae7143" exitCode=0 Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.494003 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-abe0-account-create-7rbmj" event={"ID":"e34ca05d-7673-435b-a6e6-0d775765472c","Type":"ContainerDied","Data":"d16f80ed63ce8416a7c4129769046e29ceeb8fa909d6601c2e275d63a7ae7143"} Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.495189 4678 generic.go:334] "Generic (PLEG): container finished" podID="923a45a5-bc05-4472-b647-b280bec7618b" containerID="a1f7f0825848dbfc1982da57355cc1324c2ad6611a4b2b9f8a3ef589d72c92ed" exitCode=0 Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.495293 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rn787" event={"ID":"923a45a5-bc05-4472-b647-b280bec7618b","Type":"ContainerDied","Data":"a1f7f0825848dbfc1982da57355cc1324c2ad6611a4b2b9f8a3ef589d72c92ed"} Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.496686 4678 generic.go:334] 
"Generic (PLEG): container finished" podID="5223630b-272a-434b-83df-ef3915f58880" containerID="c72e46814877bd7631836b7f6611cfdf6281ff5512e74b1c16a5d1f956ac0f00" exitCode=0 Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.496800 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-244b-account-create-6jsxj" event={"ID":"5223630b-272a-434b-83df-ef3915f58880","Type":"ContainerDied","Data":"c72e46814877bd7631836b7f6611cfdf6281ff5512e74b1c16a5d1f956ac0f00"} Nov 24 11:38:33 crc kubenswrapper[4678]: I1124 11:38:33.915874 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257bbe91-8baa-435d-9caf-a4945285bfe7" path="/var/lib/kubelet/pods/257bbe91-8baa-435d-9caf-a4945285bfe7/volumes" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.512085 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerStarted","Data":"dcc4622309f8ff43d19b657569aebfc671d5ddfc3e3d6bc7c81a82ab4f0cb082"} Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.556491 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5b6d798f4-7gdft"] Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.558644 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.562422 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.562708 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.562882 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9xsmw" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.575960 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b6d798f4-7gdft"] Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.677608 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-2w5tz"] Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.679301 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhb5z\" (UniqueName: \"kubernetes.io/projected/59630821-44d7-4a76-873f-45ea27649b05-kube-api-access-hhb5z\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.679954 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-combined-ca-bundle\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.680240 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data-custom\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.680354 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.681326 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.767802 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-2w5tz"] Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782169 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782226 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvp9x\" (UniqueName: \"kubernetes.io/projected/a189e45a-e15e-4a3b-b5de-3f0608b38f13-kube-api-access-tvp9x\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782259 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhb5z\" (UniqueName: 
\"kubernetes.io/projected/59630821-44d7-4a76-873f-45ea27649b05-kube-api-access-hhb5z\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782309 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-combined-ca-bundle\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782335 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782373 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782393 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-config\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782445 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data-custom\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782485 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.782509 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.788519 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-combined-ca-bundle\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.817517 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.817581 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data-custom\") pod 
\"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.845347 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhb5z\" (UniqueName: \"kubernetes.io/projected/59630821-44d7-4a76-873f-45ea27649b05-kube-api-access-hhb5z\") pod \"heat-engine-5b6d798f4-7gdft\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.884409 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.884497 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.884530 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-config\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.884636 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " 
pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.884738 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.884777 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvp9x\" (UniqueName: \"kubernetes.io/projected/a189e45a-e15e-4a3b-b5de-3f0608b38f13-kube-api-access-tvp9x\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.892002 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.892775 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.897281 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.897528 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-config\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.897982 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.898162 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.913910 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b58dbb476-qzjrl"] Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.914624 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvp9x\" (UniqueName: \"kubernetes.io/projected/a189e45a-e15e-4a3b-b5de-3f0608b38f13-kube-api-access-tvp9x\") pod \"dnsmasq-dns-f6bc4c6c9-2w5tz\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.916092 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.921209 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.933763 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6694596475-t2mb7"] Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.936617 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.941085 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.945256 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b58dbb476-qzjrl"] Nov 24 11:38:34 crc kubenswrapper[4678]: I1124 11:38:34.960884 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6694596475-t2mb7"] Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.029301 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.117801 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-combined-ca-bundle\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.120262 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jh7\" (UniqueName: \"kubernetes.io/projected/e5736f93-57bc-4f43-a09e-7f417d8397b0-kube-api-access-x2jh7\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.120384 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.120597 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data-custom\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.120737 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twxd4\" (UniqueName: \"kubernetes.io/projected/2a2a6860-a011-4427-bd09-bd77fe038151-kube-api-access-twxd4\") pod 
\"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.120785 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-combined-ca-bundle\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.121069 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data-custom\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.121246 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.228129 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data-custom\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.228285 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twxd4\" (UniqueName: 
\"kubernetes.io/projected/2a2a6860-a011-4427-bd09-bd77fe038151-kube-api-access-twxd4\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.228567 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-combined-ca-bundle\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.228649 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data-custom\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.228749 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.228841 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-combined-ca-bundle\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.228888 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2jh7\" (UniqueName: 
\"kubernetes.io/projected/e5736f93-57bc-4f43-a09e-7f417d8397b0-kube-api-access-x2jh7\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.230182 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.235374 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data-custom\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.236097 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.238312 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data-custom\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.239028 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data\") pod 
\"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.239772 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-combined-ca-bundle\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.240865 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-combined-ca-bundle\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.251583 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twxd4\" (UniqueName: \"kubernetes.io/projected/2a2a6860-a011-4427-bd09-bd77fe038151-kube-api-access-twxd4\") pod \"heat-cfnapi-6b58dbb476-qzjrl\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") " pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.257161 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2jh7\" (UniqueName: \"kubernetes.io/projected/e5736f93-57bc-4f43-a09e-7f417d8397b0-kube-api-access-x2jh7\") pod \"heat-api-6694596475-t2mb7\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") " pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.418418 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.445777 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:35 crc kubenswrapper[4678]: I1124 11:38:35.598780 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerStarted","Data":"ca1ec9c3fe7014f24c57e8025a99396b6d68fd570e52c0ae3f711f7488ac0ba6"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.226319 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.271716 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-244b-account-create-6jsxj" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.280487 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vfl9l" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.302578 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-02a8-account-create-vkqwp" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.308554 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rn787" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.338597 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fq7ll" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.397833 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d7fs\" (UniqueName: \"kubernetes.io/projected/e34ca05d-7673-435b-a6e6-0d775765472c-kube-api-access-4d7fs\") pod \"e34ca05d-7673-435b-a6e6-0d775765472c\" (UID: \"e34ca05d-7673-435b-a6e6-0d775765472c\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.398190 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhgjd\" (UniqueName: \"kubernetes.io/projected/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-kube-api-access-zhgjd\") pod \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\" (UID: \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.398229 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-operator-scripts\") pod \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\" (UID: \"6c34bab2-8d47-43e1-b367-8dd9b5c13c47\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.398290 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e34ca05d-7673-435b-a6e6-0d775765472c-operator-scripts\") pod \"e34ca05d-7673-435b-a6e6-0d775765472c\" (UID: \"e34ca05d-7673-435b-a6e6-0d775765472c\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.398426 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qxbx\" (UniqueName: \"kubernetes.io/projected/5223630b-272a-434b-83df-ef3915f58880-kube-api-access-8qxbx\") pod \"5223630b-272a-434b-83df-ef3915f58880\" (UID: \"5223630b-272a-434b-83df-ef3915f58880\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.398474 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5223630b-272a-434b-83df-ef3915f58880-operator-scripts\") pod \"5223630b-272a-434b-83df-ef3915f58880\" (UID: \"5223630b-272a-434b-83df-ef3915f58880\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.400762 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c34bab2-8d47-43e1-b367-8dd9b5c13c47" (UID: "6c34bab2-8d47-43e1-b367-8dd9b5c13c47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.401514 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5223630b-272a-434b-83df-ef3915f58880-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5223630b-272a-434b-83df-ef3915f58880" (UID: "5223630b-272a-434b-83df-ef3915f58880"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.421886 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34ca05d-7673-435b-a6e6-0d775765472c-kube-api-access-4d7fs" (OuterVolumeSpecName: "kube-api-access-4d7fs") pod "e34ca05d-7673-435b-a6e6-0d775765472c" (UID: "e34ca05d-7673-435b-a6e6-0d775765472c"). InnerVolumeSpecName "kube-api-access-4d7fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.426945 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5223630b-272a-434b-83df-ef3915f58880-kube-api-access-8qxbx" (OuterVolumeSpecName: "kube-api-access-8qxbx") pod "5223630b-272a-434b-83df-ef3915f58880" (UID: "5223630b-272a-434b-83df-ef3915f58880"). 
InnerVolumeSpecName "kube-api-access-8qxbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.427706 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-kube-api-access-zhgjd" (OuterVolumeSpecName: "kube-api-access-zhgjd") pod "6c34bab2-8d47-43e1-b367-8dd9b5c13c47" (UID: "6c34bab2-8d47-43e1-b367-8dd9b5c13c47"). InnerVolumeSpecName "kube-api-access-zhgjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.450750 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34ca05d-7673-435b-a6e6-0d775765472c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e34ca05d-7673-435b-a6e6-0d775765472c" (UID: "e34ca05d-7673-435b-a6e6-0d775765472c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.480066 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-2w5tz"] Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.501773 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923a45a5-bc05-4472-b647-b280bec7618b-operator-scripts\") pod \"923a45a5-bc05-4472-b647-b280bec7618b\" (UID: \"923a45a5-bc05-4472-b647-b280bec7618b\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.501866 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hss74\" (UniqueName: \"kubernetes.io/projected/fc44c93e-8f06-48eb-a0a6-36a04e942702-kube-api-access-hss74\") pod \"fc44c93e-8f06-48eb-a0a6-36a04e942702\" (UID: \"fc44c93e-8f06-48eb-a0a6-36a04e942702\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.501896 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6tmxz\" (UniqueName: \"kubernetes.io/projected/923a45a5-bc05-4472-b647-b280bec7618b-kube-api-access-6tmxz\") pod \"923a45a5-bc05-4472-b647-b280bec7618b\" (UID: \"923a45a5-bc05-4472-b647-b280bec7618b\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.502155 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc44c93e-8f06-48eb-a0a6-36a04e942702-operator-scripts\") pod \"fc44c93e-8f06-48eb-a0a6-36a04e942702\" (UID: \"fc44c93e-8f06-48eb-a0a6-36a04e942702\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.502316 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2aa84a-6c99-44d4-b3e4-11756080a16a-operator-scripts\") pod \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\" (UID: \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.502468 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfc56\" (UniqueName: \"kubernetes.io/projected/4f2aa84a-6c99-44d4-b3e4-11756080a16a-kube-api-access-vfc56\") pod \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\" (UID: \"4f2aa84a-6c99-44d4-b3e4-11756080a16a\") " Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.503074 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d7fs\" (UniqueName: \"kubernetes.io/projected/e34ca05d-7673-435b-a6e6-0d775765472c-kube-api-access-4d7fs\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.503088 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhgjd\" (UniqueName: \"kubernetes.io/projected/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-kube-api-access-zhgjd\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.503098 4678 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c34bab2-8d47-43e1-b367-8dd9b5c13c47-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.503107 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e34ca05d-7673-435b-a6e6-0d775765472c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.503116 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qxbx\" (UniqueName: \"kubernetes.io/projected/5223630b-272a-434b-83df-ef3915f58880-kube-api-access-8qxbx\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.503124 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5223630b-272a-434b-83df-ef3915f58880-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.504241 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923a45a5-bc05-4472-b647-b280bec7618b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "923a45a5-bc05-4472-b647-b280bec7618b" (UID: "923a45a5-bc05-4472-b647-b280bec7618b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.507556 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc44c93e-8f06-48eb-a0a6-36a04e942702-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc44c93e-8f06-48eb-a0a6-36a04e942702" (UID: "fc44c93e-8f06-48eb-a0a6-36a04e942702"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.507769 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/923a45a5-bc05-4472-b647-b280bec7618b-kube-api-access-6tmxz" (OuterVolumeSpecName: "kube-api-access-6tmxz") pod "923a45a5-bc05-4472-b647-b280bec7618b" (UID: "923a45a5-bc05-4472-b647-b280bec7618b"). InnerVolumeSpecName "kube-api-access-6tmxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.508041 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f2aa84a-6c99-44d4-b3e4-11756080a16a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f2aa84a-6c99-44d4-b3e4-11756080a16a" (UID: "4f2aa84a-6c99-44d4-b3e4-11756080a16a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.519404 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2aa84a-6c99-44d4-b3e4-11756080a16a-kube-api-access-vfc56" (OuterVolumeSpecName: "kube-api-access-vfc56") pod "4f2aa84a-6c99-44d4-b3e4-11756080a16a" (UID: "4f2aa84a-6c99-44d4-b3e4-11756080a16a"). InnerVolumeSpecName "kube-api-access-vfc56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.542016 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc44c93e-8f06-48eb-a0a6-36a04e942702-kube-api-access-hss74" (OuterVolumeSpecName: "kube-api-access-hss74") pod "fc44c93e-8f06-48eb-a0a6-36a04e942702" (UID: "fc44c93e-8f06-48eb-a0a6-36a04e942702"). InnerVolumeSpecName "kube-api-access-hss74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.553706 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b6d798f4-7gdft"] Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.628033 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc44c93e-8f06-48eb-a0a6-36a04e942702-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.630890 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2aa84a-6c99-44d4-b3e4-11756080a16a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.630915 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfc56\" (UniqueName: \"kubernetes.io/projected/4f2aa84a-6c99-44d4-b3e4-11756080a16a-kube-api-access-vfc56\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.630926 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923a45a5-bc05-4472-b647-b280bec7618b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.630937 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hss74\" (UniqueName: \"kubernetes.io/projected/fc44c93e-8f06-48eb-a0a6-36a04e942702-kube-api-access-hss74\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.630947 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tmxz\" (UniqueName: \"kubernetes.io/projected/923a45a5-bc05-4472-b647-b280bec7618b-kube-api-access-6tmxz\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.636304 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-6b58dbb476-qzjrl"] Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.660341 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerStarted","Data":"61d6e40fcc15acc948c9da96792c3892ecaafe5749a0935622eec3d6241a46c7"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.664798 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vfl9l" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.665365 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfl9l" event={"ID":"6c34bab2-8d47-43e1-b367-8dd9b5c13c47","Type":"ContainerDied","Data":"f442bc2ae24f12fc9124fb3677778d2c82b63cf1e5276079d9db2dd9096d1330"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.665434 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f442bc2ae24f12fc9124fb3677778d2c82b63cf1e5276079d9db2dd9096d1330" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.681933 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-02a8-account-create-vkqwp" event={"ID":"4f2aa84a-6c99-44d4-b3e4-11756080a16a","Type":"ContainerDied","Data":"600dc815118417d7065c40a23da0a2ca88c4fa01e72e4595e26f3f01ddda7da0"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.681982 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="600dc815118417d7065c40a23da0a2ca88c4fa01e72e4595e26f3f01ddda7da0" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.682076 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-02a8-account-create-vkqwp" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.696372 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-abe0-account-create-7rbmj" event={"ID":"e34ca05d-7673-435b-a6e6-0d775765472c","Type":"ContainerDied","Data":"0a8deddf41465dc2d736d3df77a92f4ea6b9171310fa468b94f816dae4a57bc3"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.696427 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a8deddf41465dc2d736d3df77a92f4ea6b9171310fa468b94f816dae4a57bc3" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.696389 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-abe0-account-create-7rbmj" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.720723 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rn787" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.721261 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rn787" event={"ID":"923a45a5-bc05-4472-b647-b280bec7618b","Type":"ContainerDied","Data":"83eaa8ff307be17cae8e75b813fa048d62d6b6ef64c59edcaf87cd99fb15ca17"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.721309 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83eaa8ff307be17cae8e75b813fa048d62d6b6ef64c59edcaf87cd99fb15ca17" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.724024 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-244b-account-create-6jsxj" event={"ID":"5223630b-272a-434b-83df-ef3915f58880","Type":"ContainerDied","Data":"6fbaec892588058364384f88886efdf96297a3156a6ea6d9716a7d21ac2e4977"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.724057 4678 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6fbaec892588058364384f88886efdf96297a3156a6ea6d9716a7d21ac2e4977" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.724127 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-244b-account-create-6jsxj" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.746476 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fq7ll" event={"ID":"fc44c93e-8f06-48eb-a0a6-36a04e942702","Type":"ContainerDied","Data":"895056274e8f2964f2801c1f025e41191098e945669a5202c296d4248f1d3e38"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.746950 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="895056274e8f2964f2801c1f025e41191098e945669a5202c296d4248f1d3e38" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.746660 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fq7ll" Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.763125 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b6d798f4-7gdft" event={"ID":"59630821-44d7-4a76-873f-45ea27649b05","Type":"ContainerStarted","Data":"8c2a9bcb9e1947ed6b5b146290711b63149a58f7553b5ebfdadcf3b2e4de78c1"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.765536 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" event={"ID":"a189e45a-e15e-4a3b-b5de-3f0608b38f13","Type":"ContainerStarted","Data":"21141659156a58c7740e8b0b782ee7b44494bf611cc7cc59607415796eb74620"} Nov 24 11:38:36 crc kubenswrapper[4678]: I1124 11:38:36.806455 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6694596475-t2mb7"] Nov 24 11:38:37 crc kubenswrapper[4678]: I1124 11:38:37.779959 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" 
event={"ID":"2a2a6860-a011-4427-bd09-bd77fe038151","Type":"ContainerStarted","Data":"027c39d8078c5d93060356f30b6c6dde87060aa329c178063d9bebe5dedb5f32"} Nov 24 11:38:37 crc kubenswrapper[4678]: I1124 11:38:37.781797 4678 generic.go:334] "Generic (PLEG): container finished" podID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" containerID="d2a40c8b39e319dea27948b1f78aad703eefcbc8c74d81eeae96e9c02492fadb" exitCode=0 Nov 24 11:38:37 crc kubenswrapper[4678]: I1124 11:38:37.781857 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" event={"ID":"a189e45a-e15e-4a3b-b5de-3f0608b38f13","Type":"ContainerDied","Data":"d2a40c8b39e319dea27948b1f78aad703eefcbc8c74d81eeae96e9c02492fadb"} Nov 24 11:38:37 crc kubenswrapper[4678]: I1124 11:38:37.784369 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6694596475-t2mb7" event={"ID":"e5736f93-57bc-4f43-a09e-7f417d8397b0","Type":"ContainerStarted","Data":"13fe472928e9e5f351e8c61450c60ab471c1e2c0f25dc5b02b9d7a75694f8f46"} Nov 24 11:38:37 crc kubenswrapper[4678]: I1124 11:38:37.788484 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b6d798f4-7gdft" event={"ID":"59630821-44d7-4a76-873f-45ea27649b05","Type":"ContainerStarted","Data":"fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7"} Nov 24 11:38:37 crc kubenswrapper[4678]: I1124 11:38:37.788713 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:38:37 crc kubenswrapper[4678]: I1124 11:38:37.882972 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5b6d798f4-7gdft" podStartSLOduration=3.882951714 podStartE2EDuration="3.882951714s" podCreationTimestamp="2025-11-24 11:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:37.837240792 +0000 UTC 
m=+1328.768300521" watchObservedRunningTime="2025-11-24 11:38:37.882951714 +0000 UTC m=+1328.814011353" Nov 24 11:38:38 crc kubenswrapper[4678]: I1124 11:38:38.804219 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" event={"ID":"a189e45a-e15e-4a3b-b5de-3f0608b38f13","Type":"ContainerStarted","Data":"e51f4ee7badcb61fb2fceeba9d1b3070f5a40793b470b1485a309d97a7e5f3bb"} Nov 24 11:38:38 crc kubenswrapper[4678]: I1124 11:38:38.805122 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:38 crc kubenswrapper[4678]: I1124 11:38:38.836263 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" podStartSLOduration=4.836240308 podStartE2EDuration="4.836240308s" podCreationTimestamp="2025-11-24 11:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:38.825870311 +0000 UTC m=+1329.756929960" watchObservedRunningTime="2025-11-24 11:38:38.836240308 +0000 UTC m=+1329.767299947" Nov 24 11:38:38 crc kubenswrapper[4678]: I1124 11:38:38.887723 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:38:38 crc kubenswrapper[4678]: I1124 11:38:38.888443 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-log" containerID="cri-o://2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb" gracePeriod=30 Nov 24 11:38:38 crc kubenswrapper[4678]: I1124 11:38:38.888563 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-httpd" 
containerID="cri-o://038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f" gracePeriod=30 Nov 24 11:38:39 crc kubenswrapper[4678]: I1124 11:38:39.829726 4678 generic.go:334] "Generic (PLEG): container finished" podID="7f345f7d-85e6-4995-9706-3189c846de37" containerID="2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb" exitCode=143 Nov 24 11:38:39 crc kubenswrapper[4678]: I1124 11:38:39.829972 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f345f7d-85e6-4995-9706-3189c846de37","Type":"ContainerDied","Data":"2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb"} Nov 24 11:38:39 crc kubenswrapper[4678]: I1124 11:38:39.835508 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" Nov 24 11:38:39 crc kubenswrapper[4678]: I1124 11:38:39.840063 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6694596475-t2mb7" event={"ID":"e5736f93-57bc-4f43-a09e-7f417d8397b0","Type":"ContainerStarted","Data":"d43107ad88df0a46dc810d4ee3b02fb9d8a99cc87322126c87740a5262264228"} Nov 24 11:38:39 crc kubenswrapper[4678]: I1124 11:38:39.840502 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:39 crc kubenswrapper[4678]: I1124 11:38:39.865760 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" podStartSLOduration=3.155882054 podStartE2EDuration="5.86574199s" podCreationTimestamp="2025-11-24 11:38:34 +0000 UTC" firstStartedPulling="2025-11-24 11:38:36.660893506 +0000 UTC m=+1327.591953145" lastFinishedPulling="2025-11-24 11:38:39.370753432 +0000 UTC m=+1330.301813081" observedRunningTime="2025-11-24 11:38:39.855961188 +0000 UTC m=+1330.787020827" watchObservedRunningTime="2025-11-24 11:38:39.86574199 +0000 UTC m=+1330.796801629" Nov 24 11:38:39 crc 
kubenswrapper[4678]: I1124 11:38:39.878398 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6694596475-t2mb7" podStartSLOduration=3.434682453 podStartE2EDuration="5.878374337s" podCreationTimestamp="2025-11-24 11:38:34 +0000 UTC" firstStartedPulling="2025-11-24 11:38:36.927389007 +0000 UTC m=+1327.858448646" lastFinishedPulling="2025-11-24 11:38:39.371080891 +0000 UTC m=+1330.302140530" observedRunningTime="2025-11-24 11:38:39.878358957 +0000 UTC m=+1330.809418596" watchObservedRunningTime="2025-11-24 11:38:39.878374337 +0000 UTC m=+1330.809433976" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.268013 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.268555 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerName="glance-log" containerID="cri-o://b6dfef16739a1c0717ae6be60c05ad9d28b7f218dfeb9c89f59a25e32dbf0a56" gracePeriod=30 Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.268656 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerName="glance-httpd" containerID="cri-o://bbbb678a73d3318e72aa080a75cb86ab2adb15dde463ab361994ee932d813da7" gracePeriod=30 Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.854304 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" event={"ID":"2a2a6860-a011-4427-bd09-bd77fe038151","Type":"ContainerStarted","Data":"7439e6b86db188333a6f11c73e354ee8e879e98737245baae3f909667fc11936"} Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.859307 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerStarted","Data":"e3c0be170cac65a61c1e2b365570b2f5d83bc7ef4edb6ccea4baa5ec9782dd2d"} Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.860019 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.863431 4678 generic.go:334] "Generic (PLEG): container finished" podID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerID="b6dfef16739a1c0717ae6be60c05ad9d28b7f218dfeb9c89f59a25e32dbf0a56" exitCode=143 Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.864395 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"765f2f85-0026-4941-94d4-8fb2f913d46d","Type":"ContainerDied","Data":"b6dfef16739a1c0717ae6be60c05ad9d28b7f218dfeb9c89f59a25e32dbf0a56"} Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.900856 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.898469227 podStartE2EDuration="8.90083384s" podCreationTimestamp="2025-11-24 11:38:32 +0000 UTC" firstStartedPulling="2025-11-24 11:38:33.369863169 +0000 UTC m=+1324.300922808" lastFinishedPulling="2025-11-24 11:38:39.372227782 +0000 UTC m=+1330.303287421" observedRunningTime="2025-11-24 11:38:40.891937593 +0000 UTC m=+1331.822997232" watchObservedRunningTime="2025-11-24 11:38:40.90083384 +0000 UTC m=+1331.831893479" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.958519 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s8zw"] Nov 24 11:38:40 crc kubenswrapper[4678]: E1124 11:38:40.959000 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2aa84a-6c99-44d4-b3e4-11756080a16a" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959018 4678 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4f2aa84a-6c99-44d4-b3e4-11756080a16a" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: E1124 11:38:40.959037 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc44c93e-8f06-48eb-a0a6-36a04e942702" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959044 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc44c93e-8f06-48eb-a0a6-36a04e942702" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: E1124 11:38:40.959053 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923a45a5-bc05-4472-b647-b280bec7618b" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959059 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="923a45a5-bc05-4472-b647-b280bec7618b" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: E1124 11:38:40.959071 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34ca05d-7673-435b-a6e6-0d775765472c" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959077 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34ca05d-7673-435b-a6e6-0d775765472c" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: E1124 11:38:40.959090 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c34bab2-8d47-43e1-b367-8dd9b5c13c47" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959096 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c34bab2-8d47-43e1-b367-8dd9b5c13c47" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: E1124 11:38:40.959116 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5223630b-272a-434b-83df-ef3915f58880" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 
11:38:40.959122 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5223630b-272a-434b-83df-ef3915f58880" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959342 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="923a45a5-bc05-4472-b647-b280bec7618b" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959360 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34ca05d-7673-435b-a6e6-0d775765472c" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959368 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc44c93e-8f06-48eb-a0a6-36a04e942702" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959382 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2aa84a-6c99-44d4-b3e4-11756080a16a" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959397 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5223630b-272a-434b-83df-ef3915f58880" containerName="mariadb-account-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.959408 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c34bab2-8d47-43e1-b367-8dd9b5c13c47" containerName="mariadb-database-create" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.960236 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.970492 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.970710 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.971346 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pl2nf" Nov 24 11:38:40 crc kubenswrapper[4678]: I1124 11:38:40.981215 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s8zw"] Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.041618 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.041663 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n2m6\" (UniqueName: \"kubernetes.io/projected/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-kube-api-access-8n2m6\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.041713 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-scripts\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " 
pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.041772 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-config-data\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.143495 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-config-data\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.143673 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.143813 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n2m6\" (UniqueName: \"kubernetes.io/projected/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-kube-api-access-8n2m6\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.143841 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-scripts\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: 
\"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.150004 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-scripts\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.151181 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-config-data\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.164245 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.183346 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n2m6\" (UniqueName: \"kubernetes.io/projected/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-kube-api-access-8n2m6\") pod \"nova-cell0-conductor-db-sync-4s8zw\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.286833 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.809983 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s8zw"] Nov 24 11:38:41 crc kubenswrapper[4678]: I1124 11:38:41.886810 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" event={"ID":"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6","Type":"ContainerStarted","Data":"aadb4af1810477e3a602834584c9fa43ca594c2a7a6848dfda70da3b301d0052"} Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.024351 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": read tcp 10.217.0.2:53318->10.217.0.191:9292: read: connection reset by peer" Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.024479 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": read tcp 10.217.0.2:53332->10.217.0.191:9292: read: connection reset by peer" Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.807570 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.878998 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"7f345f7d-85e6-4995-9706-3189c846de37\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.879105 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt6tj\" (UniqueName: \"kubernetes.io/projected/7f345f7d-85e6-4995-9706-3189c846de37-kube-api-access-mt6tj\") pod \"7f345f7d-85e6-4995-9706-3189c846de37\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.879162 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-config-data\") pod \"7f345f7d-85e6-4995-9706-3189c846de37\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.879244 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-httpd-run\") pod \"7f345f7d-85e6-4995-9706-3189c846de37\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.879275 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-combined-ca-bundle\") pod \"7f345f7d-85e6-4995-9706-3189c846de37\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.879351 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-public-tls-certs\") pod \"7f345f7d-85e6-4995-9706-3189c846de37\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.879395 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-scripts\") pod \"7f345f7d-85e6-4995-9706-3189c846de37\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.879430 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-logs\") pod \"7f345f7d-85e6-4995-9706-3189c846de37\" (UID: \"7f345f7d-85e6-4995-9706-3189c846de37\") " Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.880541 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-logs" (OuterVolumeSpecName: "logs") pod "7f345f7d-85e6-4995-9706-3189c846de37" (UID: "7f345f7d-85e6-4995-9706-3189c846de37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.880534 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7f345f7d-85e6-4995-9706-3189c846de37" (UID: "7f345f7d-85e6-4995-9706-3189c846de37"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.899853 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "7f345f7d-85e6-4995-9706-3189c846de37" (UID: "7f345f7d-85e6-4995-9706-3189c846de37"). 
InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.932087 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f345f7d-85e6-4995-9706-3189c846de37-kube-api-access-mt6tj" (OuterVolumeSpecName: "kube-api-access-mt6tj") pod "7f345f7d-85e6-4995-9706-3189c846de37" (UID: "7f345f7d-85e6-4995-9706-3189c846de37"). InnerVolumeSpecName "kube-api-access-mt6tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:42 crc kubenswrapper[4678]: I1124 11:38:42.936279 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-scripts" (OuterVolumeSpecName: "scripts") pod "7f345f7d-85e6-4995-9706-3189c846de37" (UID: "7f345f7d-85e6-4995-9706-3189c846de37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:42.991626 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt6tj\" (UniqueName: \"kubernetes.io/projected/7f345f7d-85e6-4995-9706-3189c846de37-kube-api-access-mt6tj\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:42.994028 4678 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:42.994056 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:42.994069 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f345f7d-85e6-4995-9706-3189c846de37-logs\") on node 
\"crc\" DevicePath \"\"" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:42.994098 4678 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:42.994239 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f345f7d-85e6-4995-9706-3189c846de37" (UID: "7f345f7d-85e6-4995-9706-3189c846de37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.001464 4678 generic.go:334] "Generic (PLEG): container finished" podID="7f345f7d-85e6-4995-9706-3189c846de37" containerID="038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f" exitCode=0 Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.001520 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f345f7d-85e6-4995-9706-3189c846de37","Type":"ContainerDied","Data":"038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f"} Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.001553 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f345f7d-85e6-4995-9706-3189c846de37","Type":"ContainerDied","Data":"d1592d3c0704e915aa99caaa918a065fe376602b9337676499341f453395cf0f"} Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.001574 4678 scope.go:117] "RemoveContainer" containerID="038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.001772 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.091644 4678 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.093806 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-config-data" (OuterVolumeSpecName: "config-data") pod "7f345f7d-85e6-4995-9706-3189c846de37" (UID: "7f345f7d-85e6-4995-9706-3189c846de37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.099803 4678 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.099846 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.099859 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.116306 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7f345f7d-85e6-4995-9706-3189c846de37" (UID: "7f345f7d-85e6-4995-9706-3189c846de37"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.204416 4678 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f345f7d-85e6-4995-9706-3189c846de37-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.244621 4678 scope.go:117] "RemoveContainer" containerID="2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.275793 4678 scope.go:117] "RemoveContainer" containerID="038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f" Nov 24 11:38:43 crc kubenswrapper[4678]: E1124 11:38:43.276217 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f\": container with ID starting with 038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f not found: ID does not exist" containerID="038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.276261 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f"} err="failed to get container status \"038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f\": rpc error: code = NotFound desc = could not find container \"038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f\": container with ID starting with 038dc0117a7d44bae9e834a44cae568412e96683234bc63309ab8b8b1ff68f0f not found: ID does not exist" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.276293 4678 scope.go:117] "RemoveContainer" containerID="2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb" Nov 24 11:38:43 crc kubenswrapper[4678]: E1124 11:38:43.276533 4678 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb\": container with ID starting with 2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb not found: ID does not exist" containerID="2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.276555 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb"} err="failed to get container status \"2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb\": rpc error: code = NotFound desc = could not find container \"2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb\": container with ID starting with 2cff626c73567e135858ecf12294fccf650580f251dadb3b6203f5992376d5eb not found: ID does not exist" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.342003 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.356235 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.369130 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:38:43 crc kubenswrapper[4678]: E1124 11:38:43.373608 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-log" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.373732 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-log" Nov 24 11:38:43 crc kubenswrapper[4678]: E1124 11:38:43.373818 4678 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-httpd" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.373874 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-httpd" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.374215 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-httpd" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.374295 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f345f7d-85e6-4995-9706-3189c846de37" containerName="glance-log" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.375922 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.379610 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.381130 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.415280 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.513299 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59b9920c-98be-4c2e-ba15-63d67e7f8a50-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.513753 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78hl\" (UniqueName: 
\"kubernetes.io/projected/59b9920c-98be-4c2e-ba15-63d67e7f8a50-kube-api-access-x78hl\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.513918 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59b9920c-98be-4c2e-ba15-63d67e7f8a50-logs\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.514039 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-config-data\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.514181 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.514295 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-scripts\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.513429 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:43 crc 
kubenswrapper[4678]: I1124 11:38:43.514437 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.514567 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.514896 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="ceilometer-central-agent" containerID="cri-o://dcc4622309f8ff43d19b657569aebfc671d5ddfc3e3d6bc7c81a82ab4f0cb082" gracePeriod=30 Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.515050 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="proxy-httpd" containerID="cri-o://e3c0be170cac65a61c1e2b365570b2f5d83bc7ef4edb6ccea4baa5ec9782dd2d" gracePeriod=30 Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.515095 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="sg-core" containerID="cri-o://61d6e40fcc15acc948c9da96792c3892ecaafe5749a0935622eec3d6241a46c7" gracePeriod=30 Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.515140 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="ceilometer-notification-agent" containerID="cri-o://ca1ec9c3fe7014f24c57e8025a99396b6d68fd570e52c0ae3f711f7488ac0ba6" gracePeriod=30 Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.617777 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59b9920c-98be-4c2e-ba15-63d67e7f8a50-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.617854 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x78hl\" (UniqueName: \"kubernetes.io/projected/59b9920c-98be-4c2e-ba15-63d67e7f8a50-kube-api-access-x78hl\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.618434 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59b9920c-98be-4c2e-ba15-63d67e7f8a50-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.617900 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59b9920c-98be-4c2e-ba15-63d67e7f8a50-logs\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.619047 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.619389 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.619137 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59b9920c-98be-4c2e-ba15-63d67e7f8a50-logs\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.619544 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-scripts\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.619909 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.620021 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " 
pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.620583 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.627174 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-scripts\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.646024 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-config-data\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.646493 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x78hl\" (UniqueName: \"kubernetes.io/projected/59b9920c-98be-4c2e-ba15-63d67e7f8a50-kube-api-access-x78hl\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.646761 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc 
kubenswrapper[4678]: I1124 11:38:43.655317 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59b9920c-98be-4c2e-ba15-63d67e7f8a50-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.680189 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"59b9920c-98be-4c2e-ba15-63d67e7f8a50\") " pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.702948 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 11:38:43 crc kubenswrapper[4678]: I1124 11:38:43.916562 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f345f7d-85e6-4995-9706-3189c846de37" path="/var/lib/kubelet/pods/7f345f7d-85e6-4995-9706-3189c846de37/volumes" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.088770 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-868b8dc7c4-6g2qc"] Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.090290 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.134355 4678 generic.go:334] "Generic (PLEG): container finished" podID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerID="e3c0be170cac65a61c1e2b365570b2f5d83bc7ef4edb6ccea4baa5ec9782dd2d" exitCode=0 Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.134378 4678 generic.go:334] "Generic (PLEG): container finished" podID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerID="61d6e40fcc15acc948c9da96792c3892ecaafe5749a0935622eec3d6241a46c7" exitCode=2 Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.134410 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerDied","Data":"e3c0be170cac65a61c1e2b365570b2f5d83bc7ef4edb6ccea4baa5ec9782dd2d"} Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.134434 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerDied","Data":"61d6e40fcc15acc948c9da96792c3892ecaafe5749a0935622eec3d6241a46c7"} Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.137017 4678 generic.go:334] "Generic (PLEG): container finished" podID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerID="bbbb678a73d3318e72aa080a75cb86ab2adb15dde463ab361994ee932d813da7" exitCode=0 Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.137197 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"765f2f85-0026-4941-94d4-8fb2f913d46d","Type":"ContainerDied","Data":"bbbb678a73d3318e72aa080a75cb86ab2adb15dde463ab361994ee932d813da7"} Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.149354 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-868b8dc7c4-6g2qc"] Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.163928 4678 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-547bb9ff94-2m2k8"] Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.165449 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.181809 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-b8f4768f4-mzkhn"] Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.183466 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.195993 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-547bb9ff94-2m2k8"] Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.209855 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b8f4768f4-mzkhn"] Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.258334 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data-custom\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.258415 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.258536 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4thc\" (UniqueName: 
\"kubernetes.io/projected/dbe18e72-2389-4b2f-8819-29d70cdc5965-kube-api-access-r4thc\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.258577 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-combined-ca-bundle\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.258738 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data-custom\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.258794 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-combined-ca-bundle\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.258859 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6fc4\" (UniqueName: \"kubernetes.io/projected/1f145b9b-b646-4e65-b709-367fd646614c-kube-api-access-x6fc4\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.258895 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.363305 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-combined-ca-bundle\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.363724 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data-custom\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.363760 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-combined-ca-bundle\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.363817 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6fc4\" (UniqueName: \"kubernetes.io/projected/1f145b9b-b646-4e65-b709-367fd646614c-kube-api-access-x6fc4\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.363872 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.364734 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.364828 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data-custom\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.364875 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.364894 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dmk2\" (UniqueName: \"kubernetes.io/projected/da9793e2-686b-4990-bebc-e221b3e14b9d-kube-api-access-6dmk2\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.364944 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data-custom\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.364967 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4thc\" (UniqueName: \"kubernetes.io/projected/dbe18e72-2389-4b2f-8819-29d70cdc5965-kube-api-access-r4thc\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.365003 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-combined-ca-bundle\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.373857 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-combined-ca-bundle\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.385155 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-combined-ca-bundle\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.385216 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data-custom\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.385386 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.385873 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data-custom\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.386634 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.388845 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4thc\" (UniqueName: \"kubernetes.io/projected/dbe18e72-2389-4b2f-8819-29d70cdc5965-kube-api-access-r4thc\") pod \"heat-engine-868b8dc7c4-6g2qc\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.389511 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6fc4\" (UniqueName: 
\"kubernetes.io/projected/1f145b9b-b646-4e65-b709-367fd646614c-kube-api-access-x6fc4\") pod \"heat-cfnapi-547bb9ff94-2m2k8\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.443747 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.468113 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-combined-ca-bundle\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.468347 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.468780 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dmk2\" (UniqueName: \"kubernetes.io/projected/da9793e2-686b-4990-bebc-e221b3e14b9d-kube-api-access-6dmk2\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.469250 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data-custom\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 
11:38:44.472638 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-combined-ca-bundle\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.485566 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.486585 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data-custom\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.489493 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.490182 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dmk2\" (UniqueName: \"kubernetes.io/projected/da9793e2-686b-4990-bebc-e221b3e14b9d-kube-api-access-6dmk2\") pod \"heat-api-b8f4768f4-mzkhn\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.501967 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.512587 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.581149 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.785263 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"765f2f85-0026-4941-94d4-8fb2f913d46d\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.785376 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-combined-ca-bundle\") pod \"765f2f85-0026-4941-94d4-8fb2f913d46d\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.785457 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-config-data\") pod \"765f2f85-0026-4941-94d4-8fb2f913d46d\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.785479 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-httpd-run\") pod \"765f2f85-0026-4941-94d4-8fb2f913d46d\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.785519 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5bmw\" (UniqueName: \"kubernetes.io/projected/765f2f85-0026-4941-94d4-8fb2f913d46d-kube-api-access-d5bmw\") pod \"765f2f85-0026-4941-94d4-8fb2f913d46d\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " Nov 24 
11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.785553 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-scripts\") pod \"765f2f85-0026-4941-94d4-8fb2f913d46d\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.785614 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-logs\") pod \"765f2f85-0026-4941-94d4-8fb2f913d46d\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.785664 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-internal-tls-certs\") pod \"765f2f85-0026-4941-94d4-8fb2f913d46d\" (UID: \"765f2f85-0026-4941-94d4-8fb2f913d46d\") " Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.787207 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "765f2f85-0026-4941-94d4-8fb2f913d46d" (UID: "765f2f85-0026-4941-94d4-8fb2f913d46d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.789727 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-logs" (OuterVolumeSpecName: "logs") pod "765f2f85-0026-4941-94d4-8fb2f913d46d" (UID: "765f2f85-0026-4941-94d4-8fb2f913d46d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.811901 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-scripts" (OuterVolumeSpecName: "scripts") pod "765f2f85-0026-4941-94d4-8fb2f913d46d" (UID: "765f2f85-0026-4941-94d4-8fb2f913d46d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.812000 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "765f2f85-0026-4941-94d4-8fb2f913d46d" (UID: "765f2f85-0026-4941-94d4-8fb2f913d46d"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.814643 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765f2f85-0026-4941-94d4-8fb2f913d46d-kube-api-access-d5bmw" (OuterVolumeSpecName: "kube-api-access-d5bmw") pod "765f2f85-0026-4941-94d4-8fb2f913d46d" (UID: "765f2f85-0026-4941-94d4-8fb2f913d46d"). InnerVolumeSpecName "kube-api-access-d5bmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.878395 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "765f2f85-0026-4941-94d4-8fb2f913d46d" (UID: "765f2f85-0026-4941-94d4-8fb2f913d46d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.888591 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5bmw\" (UniqueName: \"kubernetes.io/projected/765f2f85-0026-4941-94d4-8fb2f913d46d-kube-api-access-d5bmw\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.888630 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.888643 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.888667 4678 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.890279 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.890297 4678 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/765f2f85-0026-4941-94d4-8fb2f913d46d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.920393 4678 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.948256 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-config-data" (OuterVolumeSpecName: "config-data") pod "765f2f85-0026-4941-94d4-8fb2f913d46d" (UID: "765f2f85-0026-4941-94d4-8fb2f913d46d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.961749 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "765f2f85-0026-4941-94d4-8fb2f913d46d" (UID: "765f2f85-0026-4941-94d4-8fb2f913d46d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.993277 4678 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.993735 4678 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:44 crc kubenswrapper[4678]: I1124 11:38:44.993825 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765f2f85-0026-4941-94d4-8fb2f913d46d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.031845 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.142898 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dxgvv"] Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.143157 4678 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" podUID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" containerName="dnsmasq-dns" containerID="cri-o://f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317" gracePeriod=10 Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.179434 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-868b8dc7c4-6g2qc"] Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.181730 4678 generic.go:334] "Generic (PLEG): container finished" podID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerID="ca1ec9c3fe7014f24c57e8025a99396b6d68fd570e52c0ae3f711f7488ac0ba6" exitCode=0 Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.181819 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerDied","Data":"ca1ec9c3fe7014f24c57e8025a99396b6d68fd570e52c0ae3f711f7488ac0ba6"} Nov 24 11:38:45 crc kubenswrapper[4678]: W1124 11:38:45.190914 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe18e72_2389_4b2f_8819_29d70cdc5965.slice/crio-06b9d287e65dd4febb45c4cec9dbcb325e51c971b614dc2ad5480dbef8512674 WatchSource:0}: Error finding container 06b9d287e65dd4febb45c4cec9dbcb325e51c971b614dc2ad5480dbef8512674: Status 404 returned error can't find the container with id 06b9d287e65dd4febb45c4cec9dbcb325e51c971b614dc2ad5480dbef8512674 Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.191140 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.191984 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"765f2f85-0026-4941-94d4-8fb2f913d46d","Type":"ContainerDied","Data":"6e32eb25e1274aee30c86bde3902a5193bd85b7c6914bfe4d989da4d10c050d4"} Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.192040 4678 scope.go:117] "RemoveContainer" containerID="bbbb678a73d3318e72aa080a75cb86ab2adb15dde463ab361994ee932d813da7" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.213179 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59b9920c-98be-4c2e-ba15-63d67e7f8a50","Type":"ContainerStarted","Data":"97cd3b7b2bbef99dcb4819aab13c6a97240636708b5bbd2f582bb6d83d417d43"} Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.436053 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.460275 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.525782 4678 scope.go:117] "RemoveContainer" containerID="b6dfef16739a1c0717ae6be60c05ad9d28b7f218dfeb9c89f59a25e32dbf0a56" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.532841 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:38:45 crc kubenswrapper[4678]: E1124 11:38:45.534080 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerName="glance-log" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.534174 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerName="glance-log" Nov 24 11:38:45 crc kubenswrapper[4678]: E1124 11:38:45.538971 
4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerName="glance-httpd" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.539124 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerName="glance-httpd" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.553213 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerName="glance-log" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.553380 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" containerName="glance-httpd" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.565977 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: W1124 11:38:45.567610 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda9793e2_686b_4990_bebc_e221b3e14b9d.slice/crio-d31ca745ec8c687b29aa6f551d19cea519ec313ab89e69efb044b0d48b664345 WatchSource:0}: Error finding container d31ca745ec8c687b29aa6f551d19cea519ec313ab89e69efb044b0d48b664345: Status 404 returned error can't find the container with id d31ca745ec8c687b29aa6f551d19cea519ec313ab89e69efb044b0d48b664345 Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.569248 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.569381 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.569584 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 11:38:45 crc 
kubenswrapper[4678]: I1124 11:38:45.587229 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b8f4768f4-mzkhn"] Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.656193 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.656753 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.656914 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfm66\" (UniqueName: \"kubernetes.io/projected/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-kube-api-access-bfm66\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.657140 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.657264 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.657365 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.657467 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.657535 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.689246 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-547bb9ff94-2m2k8"] Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.759438 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 
11:38:45.759491 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.759521 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.759545 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.759559 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.759668 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.759710 4678 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.759742 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfm66\" (UniqueName: \"kubernetes.io/projected/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-kube-api-access-bfm66\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.760982 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.761328 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.763442 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.765452 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.777171 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.788394 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.788605 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.802529 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfm66\" (UniqueName: \"kubernetes.io/projected/1f7848f6-dff5-403f-b2bd-22d8a1e43b0c-kube-api-access-bfm66\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.934361 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765f2f85-0026-4941-94d4-8fb2f913d46d" path="/var/lib/kubelet/pods/765f2f85-0026-4941-94d4-8fb2f913d46d/volumes" Nov 24 
11:38:45 crc kubenswrapper[4678]: I1124 11:38:45.982908 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c\") " pod="openstack/glance-default-internal-api-0" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.194512 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.208394 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.283959 4678 generic.go:334] "Generic (PLEG): container finished" podID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerID="dcc4622309f8ff43d19b657569aebfc671d5ddfc3e3d6bc7c81a82ab4f0cb082" exitCode=0 Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.284236 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerDied","Data":"dcc4622309f8ff43d19b657569aebfc671d5ddfc3e3d6bc7c81a82ab4f0cb082"} Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.318554 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59b9920c-98be-4c2e-ba15-63d67e7f8a50","Type":"ContainerStarted","Data":"c19682527da16a8654b482db26f282ca175f5ab79b2b58ed2a2f7cfafd7d768d"} Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.328849 4678 generic.go:334] "Generic (PLEG): container finished" podID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" containerID="f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317" exitCode=0 Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.328927 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" event={"ID":"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248","Type":"ContainerDied","Data":"f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317"} Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.328957 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" event={"ID":"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248","Type":"ContainerDied","Data":"0574f96e8898ad762d9b667f6349c5c55ca90c14d665fa19943c451258ae62ca"} Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.328975 4678 scope.go:117] "RemoveContainer" containerID="f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.329095 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-dxgvv" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.351406 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f4768f4-mzkhn" event={"ID":"da9793e2-686b-4990-bebc-e221b3e14b9d","Type":"ContainerStarted","Data":"d31ca745ec8c687b29aa6f551d19cea519ec313ab89e69efb044b0d48b664345"} Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.374598 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" event={"ID":"1f145b9b-b646-4e65-b709-367fd646614c","Type":"ContainerStarted","Data":"591ae31281621c4d80dce12b7d6c34e7b526e2c794f9cbdc11014adf412b7b7c"} Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.382754 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-nb\") pod \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.382823 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-zmtj2\" (UniqueName: \"kubernetes.io/projected/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-kube-api-access-zmtj2\") pod \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.382896 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-swift-storage-0\") pod \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.382960 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-sb\") pod \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.383163 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-config\") pod \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.383214 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-svc\") pod \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\" (UID: \"eaf6d4b1-0dd0-4d17-b7f2-8503259f4248\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.388883 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-868b8dc7c4-6g2qc" event={"ID":"dbe18e72-2389-4b2f-8819-29d70cdc5965","Type":"ContainerStarted","Data":"afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9"} Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 
11:38:46.388926 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-868b8dc7c4-6g2qc" event={"ID":"dbe18e72-2389-4b2f-8819-29d70cdc5965","Type":"ContainerStarted","Data":"06b9d287e65dd4febb45c4cec9dbcb325e51c971b614dc2ad5480dbef8512674"} Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.389961 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.392882 4678 scope.go:117] "RemoveContainer" containerID="c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.421420 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-kube-api-access-zmtj2" (OuterVolumeSpecName: "kube-api-access-zmtj2") pod "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" (UID: "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248"). InnerVolumeSpecName "kube-api-access-zmtj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.422910 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-868b8dc7c4-6g2qc" podStartSLOduration=2.422887576 podStartE2EDuration="2.422887576s" podCreationTimestamp="2025-11-24 11:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:46.41926348 +0000 UTC m=+1337.350323129" watchObservedRunningTime="2025-11-24 11:38:46.422887576 +0000 UTC m=+1337.353947225" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.488979 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmtj2\" (UniqueName: \"kubernetes.io/projected/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-kube-api-access-zmtj2\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.590080 4678 scope.go:117] "RemoveContainer" containerID="f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317" Nov 24 11:38:46 crc kubenswrapper[4678]: E1124 11:38:46.602973 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317\": container with ID starting with f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317 not found: ID does not exist" containerID="f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.603027 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317"} err="failed to get container status \"f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317\": rpc error: code = NotFound desc = could not find container 
\"f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317\": container with ID starting with f53ec4a8665ae9bc3cdafc701373e934ca91b647874b4b1c228d135ffce87317 not found: ID does not exist" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.603056 4678 scope.go:117] "RemoveContainer" containerID="c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1" Nov 24 11:38:46 crc kubenswrapper[4678]: E1124 11:38:46.603577 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1\": container with ID starting with c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1 not found: ID does not exist" containerID="c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.603692 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1"} err="failed to get container status \"c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1\": rpc error: code = NotFound desc = could not find container \"c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1\": container with ID starting with c7c2cb7000eb5d1e644d749f9dbf3c374ed0ddbf1c2666f58f6e705da94aebf1 not found: ID does not exist" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.674310 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.680560 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-config" (OuterVolumeSpecName: "config") pod "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" (UID: "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.687625 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" (UID: "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.701839 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.701865 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.793244 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" (UID: "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.804165 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-config-data\") pod \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.804217 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-combined-ca-bundle\") pod \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.804290 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-scripts\") pod \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.804341 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk8nk\" (UniqueName: \"kubernetes.io/projected/a0428bc2-7f90-4d19-86d4-ce0a69513a88-kube-api-access-hk8nk\") pod \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.804403 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-sg-core-conf-yaml\") pod \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.804491 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-run-httpd\") pod \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.804520 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-log-httpd\") pod \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.805173 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.805747 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0428bc2-7f90-4d19-86d4-ce0a69513a88" (UID: "a0428bc2-7f90-4d19-86d4-ce0a69513a88"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.809304 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" (UID: "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.809952 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0428bc2-7f90-4d19-86d4-ce0a69513a88" (UID: "a0428bc2-7f90-4d19-86d4-ce0a69513a88"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.814834 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-scripts" (OuterVolumeSpecName: "scripts") pod "a0428bc2-7f90-4d19-86d4-ce0a69513a88" (UID: "a0428bc2-7f90-4d19-86d4-ce0a69513a88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.827889 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0428bc2-7f90-4d19-86d4-ce0a69513a88-kube-api-access-hk8nk" (OuterVolumeSpecName: "kube-api-access-hk8nk") pod "a0428bc2-7f90-4d19-86d4-ce0a69513a88" (UID: "a0428bc2-7f90-4d19-86d4-ce0a69513a88"). InnerVolumeSpecName "kube-api-access-hk8nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.828714 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" (UID: "eaf6d4b1-0dd0-4d17-b7f2-8503259f4248"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.875867 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a0428bc2-7f90-4d19-86d4-ce0a69513a88" (UID: "a0428bc2-7f90-4d19-86d4-ce0a69513a88"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.918463 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.929123 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk8nk\" (UniqueName: \"kubernetes.io/projected/a0428bc2-7f90-4d19-86d4-ce0a69513a88-kube-api-access-hk8nk\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.929200 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.929214 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.929226 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.929238 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.929246 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0428bc2-7f90-4d19-86d4-ce0a69513a88-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:46 crc kubenswrapper[4678]: I1124 11:38:46.983456 4678 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 11:38:47 crc kubenswrapper[4678]: W1124 11:38:47.017892 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f7848f6_dff5_403f_b2bd_22d8a1e43b0c.slice/crio-9e05509292ce4274913f0d3fbb098fbe48c94901e163877cf6c7b6f740e597fc WatchSource:0}: Error finding container 9e05509292ce4274913f0d3fbb098fbe48c94901e163877cf6c7b6f740e597fc: Status 404 returned error can't find the container with id 9e05509292ce4274913f0d3fbb098fbe48c94901e163877cf6c7b6f740e597fc Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.030952 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0428bc2-7f90-4d19-86d4-ce0a69513a88" (UID: "a0428bc2-7f90-4d19-86d4-ce0a69513a88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.032271 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-combined-ca-bundle\") pod \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\" (UID: \"a0428bc2-7f90-4d19-86d4-ce0a69513a88\") " Nov 24 11:38:47 crc kubenswrapper[4678]: W1124 11:38:47.032954 4678 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a0428bc2-7f90-4d19-86d4-ce0a69513a88/volumes/kubernetes.io~secret/combined-ca-bundle Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.033019 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0428bc2-7f90-4d19-86d4-ce0a69513a88" (UID: "a0428bc2-7f90-4d19-86d4-ce0a69513a88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.037683 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.093501 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-config-data" (OuterVolumeSpecName: "config-data") pod "a0428bc2-7f90-4d19-86d4-ce0a69513a88" (UID: "a0428bc2-7f90-4d19-86d4-ce0a69513a88"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.140448 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0428bc2-7f90-4d19-86d4-ce0a69513a88-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.336735 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dxgvv"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.355048 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-dxgvv"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.449530 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0428bc2-7f90-4d19-86d4-ce0a69513a88","Type":"ContainerDied","Data":"153c71a14becd316428da11cbcba3b075c365e25dbf2af526ad022ad285b35f2"} Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.449588 4678 scope.go:117] "RemoveContainer" containerID="e3c0be170cac65a61c1e2b365570b2f5d83bc7ef4edb6ccea4baa5ec9782dd2d" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.449739 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.481502 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6694596475-t2mb7"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.481748 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6694596475-t2mb7" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" containerID="cri-o://d43107ad88df0a46dc810d4ee3b02fb9d8a99cc87322126c87740a5262264228" gracePeriod=60 Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.502752 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6694596475-t2mb7" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": EOF" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.502810 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-6694596475-t2mb7" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": EOF" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.508367 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6694596475-t2mb7" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": EOF" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.514658 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c","Type":"ContainerStarted","Data":"9e05509292ce4274913f0d3fbb098fbe48c94901e163877cf6c7b6f740e597fc"} Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.525693 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d5bd86fdc-h8dll"] Nov 24 11:38:47 
crc kubenswrapper[4678]: E1124 11:38:47.526176 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" containerName="init" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526193 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" containerName="init" Nov 24 11:38:47 crc kubenswrapper[4678]: E1124 11:38:47.526209 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="proxy-httpd" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526216 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="proxy-httpd" Nov 24 11:38:47 crc kubenswrapper[4678]: E1124 11:38:47.526229 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" containerName="dnsmasq-dns" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526235 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" containerName="dnsmasq-dns" Nov 24 11:38:47 crc kubenswrapper[4678]: E1124 11:38:47.526276 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="ceilometer-central-agent" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526281 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="ceilometer-central-agent" Nov 24 11:38:47 crc kubenswrapper[4678]: E1124 11:38:47.526290 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="ceilometer-notification-agent" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526296 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="ceilometer-notification-agent" Nov 24 11:38:47 crc 
kubenswrapper[4678]: E1124 11:38:47.526311 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="sg-core" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526317 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="sg-core" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526521 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="ceilometer-central-agent" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526532 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="sg-core" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526544 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="ceilometer-notification-agent" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526556 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" containerName="dnsmasq-dns" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.526584 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" containerName="proxy-httpd" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.527340 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.531801 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.532002 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.561984 4678 generic.go:334] "Generic (PLEG): container finished" podID="da9793e2-686b-4990-bebc-e221b3e14b9d" containerID="fde665cf004898f968b04b9b853ee8cd95cdb48d00850d5d307bd52e4cf2ba6e" exitCode=1 Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.562075 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f4768f4-mzkhn" event={"ID":"da9793e2-686b-4990-bebc-e221b3e14b9d","Type":"ContainerDied","Data":"fde665cf004898f968b04b9b853ee8cd95cdb48d00850d5d307bd52e4cf2ba6e"} Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.562782 4678 scope.go:117] "RemoveContainer" containerID="fde665cf004898f968b04b9b853ee8cd95cdb48d00850d5d307bd52e4cf2ba6e" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.578828 4678 scope.go:117] "RemoveContainer" containerID="61d6e40fcc15acc948c9da96792c3892ecaafe5749a0935622eec3d6241a46c7" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.582929 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b58dbb476-qzjrl"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.583150 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" containerID="cri-o://7439e6b86db188333a6f11c73e354ee8e879e98737245baae3f909667fc11936" gracePeriod=60 Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.590618 4678 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": EOF" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.594506 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": EOF" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.600974 4678 generic.go:334] "Generic (PLEG): container finished" podID="1f145b9b-b646-4e65-b709-367fd646614c" containerID="64df3b8a49f1e166847ad845a39bb5bb9539ab94ccfe825e39dd7a12e9464c85" exitCode=1 Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.602008 4678 scope.go:117] "RemoveContainer" containerID="64df3b8a49f1e166847ad845a39bb5bb9539ab94ccfe825e39dd7a12e9464c85" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.602234 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" event={"ID":"1f145b9b-b646-4e65-b709-367fd646614c","Type":"ContainerDied","Data":"64df3b8a49f1e166847ad845a39bb5bb9539ab94ccfe825e39dd7a12e9464c85"} Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.608299 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d5bd86fdc-h8dll"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.623892 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-58b5bdcfc5-zwlfb"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.625651 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.643619 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.643825 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.655436 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbdtt\" (UniqueName: \"kubernetes.io/projected/a72aa4c3-72f4-473d-bf8f-a16b6d456add-kube-api-access-sbdtt\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.655514 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-internal-tls-certs\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.655535 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-combined-ca-bundle\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.655629 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " 
pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.655716 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-public-tls-certs\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.655753 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data-custom\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.666884 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-58b5bdcfc5-zwlfb"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.678758 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.737022 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.759360 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbdtt\" (UniqueName: \"kubernetes.io/projected/a72aa4c3-72f4-473d-bf8f-a16b6d456add-kube-api-access-sbdtt\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.759893 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-public-tls-certs\") pod 
\"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.760721 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.760798 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-internal-tls-certs\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.760831 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-combined-ca-bundle\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.760883 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data-custom\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.761129 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data\") pod \"heat-api-6d5bd86fdc-h8dll\" 
(UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.761240 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-combined-ca-bundle\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.761320 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-public-tls-certs\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.761349 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data-custom\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.761372 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgw6\" (UniqueName: \"kubernetes.io/projected/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-kube-api-access-phgw6\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.761419 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-internal-tls-certs\") pod 
\"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.790821 4678 scope.go:117] "RemoveContainer" containerID="ca1ec9c3fe7014f24c57e8025a99396b6d68fd570e52c0ae3f711f7488ac0ba6" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.805912 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbdtt\" (UniqueName: \"kubernetes.io/projected/a72aa4c3-72f4-473d-bf8f-a16b6d456add-kube-api-access-sbdtt\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.813834 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data-custom\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.815205 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-public-tls-certs\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.818035 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-internal-tls-certs\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.818735 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-combined-ca-bundle\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.821048 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data\") pod \"heat-api-6d5bd86fdc-h8dll\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.850561 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.855248 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.860031 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.860278 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.862956 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-combined-ca-bundle\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.863032 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phgw6\" (UniqueName: \"kubernetes.io/projected/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-kube-api-access-phgw6\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " 
pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.863058 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-internal-tls-certs\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.863113 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-public-tls-certs\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.863145 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.863176 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data-custom\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.871237 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-combined-ca-bundle\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 
11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.872617 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.875522 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data-custom\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.875834 4678 scope.go:117] "RemoveContainer" containerID="dcc4622309f8ff43d19b657569aebfc671d5ddfc3e3d6bc7c81a82ab4f0cb082"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.885497 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-internal-tls-certs\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.886893 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-public-tls-certs\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.890327 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phgw6\" (UniqueName: \"kubernetes.io/projected/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-kube-api-access-phgw6\") pod \"heat-cfnapi-58b5bdcfc5-zwlfb\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.903383 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d5bd86fdc-h8dll"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.969311 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.971299 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.972061 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-log-httpd\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.972043 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0428bc2-7f90-4d19-86d4-ce0a69513a88" path="/var/lib/kubelet/pods/a0428bc2-7f90-4d19-86d4-ce0a69513a88/volumes"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.972131 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-config-data\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.972543 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-run-httpd\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.973663 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5fgl\" (UniqueName: \"kubernetes.io/projected/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-kube-api-access-n5fgl\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.973697 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf6d4b1-0dd0-4d17-b7f2-8503259f4248" path="/var/lib/kubelet/pods/eaf6d4b1-0dd0-4d17-b7f2-8503259f4248/volumes"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.973811 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.973860 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-scripts\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:47 crc kubenswrapper[4678]: I1124 11:38:47.974651 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.076189 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5fgl\" (UniqueName: \"kubernetes.io/projected/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-kube-api-access-n5fgl\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.076548 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.076576 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-scripts\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.076597 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.076650 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-log-httpd\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.076689 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-config-data\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.076742 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-run-httpd\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.077212 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-run-httpd\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.088426 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.095908 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-log-httpd\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.115583 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.119981 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-config-data\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.122174 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5fgl\" (UniqueName: \"kubernetes.io/projected/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-kube-api-access-n5fgl\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.129657 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-scripts\") pod \"ceilometer-0\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.310487 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.641373 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59b9920c-98be-4c2e-ba15-63d67e7f8a50","Type":"ContainerStarted","Data":"a427d6449ef4c6d4ab30e5f3720c4f215a4b8db5c51130ae99c68ea736dc073e"}
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.648519 4678 generic.go:334] "Generic (PLEG): container finished" podID="da9793e2-686b-4990-bebc-e221b3e14b9d" containerID="620def8d7f6b39bb95595b1f08cf19270013b54878bdaa319fbebbcdcda25bae" exitCode=1
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.648627 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f4768f4-mzkhn" event={"ID":"da9793e2-686b-4990-bebc-e221b3e14b9d","Type":"ContainerDied","Data":"620def8d7f6b39bb95595b1f08cf19270013b54878bdaa319fbebbcdcda25bae"}
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.648709 4678 scope.go:117] "RemoveContainer" containerID="fde665cf004898f968b04b9b853ee8cd95cdb48d00850d5d307bd52e4cf2ba6e"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.650812 4678 scope.go:117] "RemoveContainer" containerID="620def8d7f6b39bb95595b1f08cf19270013b54878bdaa319fbebbcdcda25bae"
Nov 24 11:38:48 crc kubenswrapper[4678]: E1124 11:38:48.651317 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-b8f4768f4-mzkhn_openstack(da9793e2-686b-4990-bebc-e221b3e14b9d)\"" pod="openstack/heat-api-b8f4768f4-mzkhn" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.654792 4678 generic.go:334] "Generic (PLEG): container finished" podID="1f145b9b-b646-4e65-b709-367fd646614c" containerID="689d5bd851ad3e08cf736d8d8037fb1133c8215234669e7b5448947c9ef8bbd8" exitCode=1
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.655071 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" event={"ID":"1f145b9b-b646-4e65-b709-367fd646614c","Type":"ContainerDied","Data":"689d5bd851ad3e08cf736d8d8037fb1133c8215234669e7b5448947c9ef8bbd8"}
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.655791 4678 scope.go:117] "RemoveContainer" containerID="689d5bd851ad3e08cf736d8d8037fb1133c8215234669e7b5448947c9ef8bbd8"
Nov 24 11:38:48 crc kubenswrapper[4678]: E1124 11:38:48.656509 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-547bb9ff94-2m2k8_openstack(1f145b9b-b646-4e65-b709-367fd646614c)\"" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" podUID="1f145b9b-b646-4e65-b709-367fd646614c"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.665175 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c","Type":"ContainerStarted","Data":"8863bd0313c026af8796ddd3c74ce360c7d3fc050d171d85eb6d30aa06a8f1f6"}
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.698732 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.698702993 podStartE2EDuration="5.698702993s" podCreationTimestamp="2025-11-24 11:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:48.670216721 +0000 UTC m=+1339.601276360" watchObservedRunningTime="2025-11-24 11:38:48.698702993 +0000 UTC m=+1339.629762632"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.770648 4678 scope.go:117] "RemoveContainer" containerID="64df3b8a49f1e166847ad845a39bb5bb9539ab94ccfe825e39dd7a12e9464c85"
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.776458 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d5bd86fdc-h8dll"]
Nov 24 11:38:48 crc kubenswrapper[4678]: I1124 11:38:48.957100 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-58b5bdcfc5-zwlfb"]
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.102617 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:38:49 crc kubenswrapper[4678]: E1124 11:38:49.134947 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0cecbb6_5a82_4c3e_9bd5_94db58a9f06b.slice/crio-8427c8e34c7118ad124f89af17ac93c9aebeece65d54a93eca9857c9221ae9b7\": RecentStats: unable to find data in memory cache]"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.490430 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.490877 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.513104 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-b8f4768f4-mzkhn"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.513153 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-b8f4768f4-mzkhn"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.678042 4678 scope.go:117] "RemoveContainer" containerID="689d5bd851ad3e08cf736d8d8037fb1133c8215234669e7b5448947c9ef8bbd8"
Nov 24 11:38:49 crc kubenswrapper[4678]: E1124 11:38:49.678439 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-547bb9ff94-2m2k8_openstack(1f145b9b-b646-4e65-b709-367fd646614c)\"" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" podUID="1f145b9b-b646-4e65-b709-367fd646614c"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.686681 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f7848f6-dff5-403f-b2bd-22d8a1e43b0c","Type":"ContainerStarted","Data":"2b12e4e25d9a312f5464a8d8403f65766423742f5041eff0517fe7490f202cce"}
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.690816 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d5bd86fdc-h8dll" event={"ID":"a72aa4c3-72f4-473d-bf8f-a16b6d456add","Type":"ContainerStarted","Data":"70127dabb05c80deebbf61255958491260ab9ab73ce1030ea5f8b33914502887"}
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.690893 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d5bd86fdc-h8dll" event={"ID":"a72aa4c3-72f4-473d-bf8f-a16b6d456add","Type":"ContainerStarted","Data":"7c60c88ee4f2d2ceb8f21b875ef2a3fdb9a322888207af12e28508da0484d79d"}
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.692158 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d5bd86fdc-h8dll"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.694051 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerStarted","Data":"8427c8e34c7118ad124f89af17ac93c9aebeece65d54a93eca9857c9221ae9b7"}
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.702479 4678 scope.go:117] "RemoveContainer" containerID="620def8d7f6b39bb95595b1f08cf19270013b54878bdaa319fbebbcdcda25bae"
Nov 24 11:38:49 crc kubenswrapper[4678]: E1124 11:38:49.702819 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-b8f4768f4-mzkhn_openstack(da9793e2-686b-4990-bebc-e221b3e14b9d)\"" pod="openstack/heat-api-b8f4768f4-mzkhn" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.704850 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" event={"ID":"ab7fd19c-25f8-400e-b98a-e5dd65e113ac","Type":"ContainerStarted","Data":"4a8413214958702d07e05b1d3adbf724e5f7fa558cb7975892d3c43f6cacce03"}
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.704903 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" event={"ID":"ab7fd19c-25f8-400e-b98a-e5dd65e113ac","Type":"ContainerStarted","Data":"172e6c8221bfb41715afe619c134cb2fe9f7032ddc78cd8c802a02e8c21d87cb"}
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.705146 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.730848 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.730823115 podStartE2EDuration="4.730823115s" podCreationTimestamp="2025-11-24 11:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:49.72503866 +0000 UTC m=+1340.656098299" watchObservedRunningTime="2025-11-24 11:38:49.730823115 +0000 UTC m=+1340.661882754"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.777757 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d5bd86fdc-h8dll" podStartSLOduration=2.769814187 podStartE2EDuration="2.769814187s" podCreationTimestamp="2025-11-24 11:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:49.754734534 +0000 UTC m=+1340.685794173" watchObservedRunningTime="2025-11-24 11:38:49.769814187 +0000 UTC m=+1340.700873826"
Nov 24 11:38:49 crc kubenswrapper[4678]: I1124 11:38:49.789411 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" podStartSLOduration=2.78939234 podStartE2EDuration="2.78939234s" podCreationTimestamp="2025-11-24 11:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:38:49.783850262 +0000 UTC m=+1340.714909901" watchObservedRunningTime="2025-11-24 11:38:49.78939234 +0000 UTC m=+1340.720451979"
Nov 24 11:38:50 crc kubenswrapper[4678]: I1124 11:38:50.719768 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerStarted","Data":"bcd1c86b94a8fd4be4f8dcc65bb4013d50a478c0201a4a9eb405f4b13f535087"}
Nov 24 11:38:50 crc kubenswrapper[4678]: I1124 11:38:50.720548 4678 scope.go:117] "RemoveContainer" containerID="689d5bd851ad3e08cf736d8d8037fb1133c8215234669e7b5448947c9ef8bbd8"
Nov 24 11:38:50 crc kubenswrapper[4678]: I1124 11:38:50.720820 4678 scope.go:117] "RemoveContainer" containerID="620def8d7f6b39bb95595b1f08cf19270013b54878bdaa319fbebbcdcda25bae"
Nov 24 11:38:50 crc kubenswrapper[4678]: E1124 11:38:50.721061 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-b8f4768f4-mzkhn_openstack(da9793e2-686b-4990-bebc-e221b3e14b9d)\"" pod="openstack/heat-api-b8f4768f4-mzkhn" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d"
Nov 24 11:38:50 crc kubenswrapper[4678]: E1124 11:38:50.721217 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-547bb9ff94-2m2k8_openstack(1f145b9b-b646-4e65-b709-367fd646614c)\"" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" podUID="1f145b9b-b646-4e65-b709-367fd646614c"
Nov 24 11:38:51 crc kubenswrapper[4678]: I1124 11:38:51.341017 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 11:38:52 crc kubenswrapper[4678]: I1124 11:38:52.033894 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": read tcp 10.217.0.2:50284->10.217.0.216:8000: read: connection reset by peer"
Nov 24 11:38:52 crc kubenswrapper[4678]: I1124 11:38:52.034329 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": dial tcp 10.217.0.216:8000: connect: connection refused"
Nov 24 11:38:52 crc kubenswrapper[4678]: I1124 11:38:52.753539 4678 generic.go:334] "Generic (PLEG): container finished" podID="2a2a6860-a011-4427-bd09-bd77fe038151" containerID="7439e6b86db188333a6f11c73e354ee8e879e98737245baae3f909667fc11936" exitCode=0
Nov 24 11:38:52 crc kubenswrapper[4678]: I1124 11:38:52.753875 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" event={"ID":"2a2a6860-a011-4427-bd09-bd77fe038151","Type":"ContainerDied","Data":"7439e6b86db188333a6f11c73e354ee8e879e98737245baae3f909667fc11936"}
Nov 24 11:38:52 crc kubenswrapper[4678]: I1124 11:38:52.987328 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6694596475-t2mb7" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": read tcp 10.217.0.2:56346->10.217.0.217:8004: read: connection reset by peer"
Nov 24 11:38:53 crc kubenswrapper[4678]: I1124 11:38:53.703478 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 11:38:53 crc kubenswrapper[4678]: I1124 11:38:53.703821 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 11:38:53 crc kubenswrapper[4678]: I1124 11:38:53.737757 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 11:38:53 crc kubenswrapper[4678]: I1124 11:38:53.768630 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 11:38:53 crc kubenswrapper[4678]: I1124 11:38:53.781896 4678 generic.go:334] "Generic (PLEG): container finished" podID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerID="d43107ad88df0a46dc810d4ee3b02fb9d8a99cc87322126c87740a5262264228" exitCode=0
Nov 24 11:38:53 crc kubenswrapper[4678]: I1124 11:38:53.781981 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6694596475-t2mb7" event={"ID":"e5736f93-57bc-4f43-a09e-7f417d8397b0","Type":"ContainerDied","Data":"d43107ad88df0a46dc810d4ee3b02fb9d8a99cc87322126c87740a5262264228"}
Nov 24 11:38:53 crc kubenswrapper[4678]: I1124 11:38:53.782221 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 11:38:53 crc kubenswrapper[4678]: I1124 11:38:53.782631 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 11:38:54 crc kubenswrapper[4678]: I1124 11:38:54.948167 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5b6d798f4-7gdft"
Nov 24 11:38:55 crc kubenswrapper[4678]: I1124 11:38:55.420727 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": dial tcp 10.217.0.216:8000: connect: connection refused"
Nov 24 11:38:55 crc kubenswrapper[4678]: I1124 11:38:55.450275 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6694596475-t2mb7" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": dial tcp 10.217.0.217:8004: connect: connection refused"
Nov 24 11:38:55 crc kubenswrapper[4678]: I1124 11:38:55.914547 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 11:38:55 crc kubenswrapper[4678]: I1124 11:38:55.914955 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.195732 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.195792 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.240340 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.275924 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.551183 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.638961 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data\") pod \"2a2a6860-a011-4427-bd09-bd77fe038151\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") "
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.639022 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data-custom\") pod \"2a2a6860-a011-4427-bd09-bd77fe038151\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") "
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.639049 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twxd4\" (UniqueName: \"kubernetes.io/projected/2a2a6860-a011-4427-bd09-bd77fe038151-kube-api-access-twxd4\") pod \"2a2a6860-a011-4427-bd09-bd77fe038151\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") "
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.639243 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-combined-ca-bundle\") pod \"2a2a6860-a011-4427-bd09-bd77fe038151\" (UID: \"2a2a6860-a011-4427-bd09-bd77fe038151\") "
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.644754 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a2a6860-a011-4427-bd09-bd77fe038151" (UID: "2a2a6860-a011-4427-bd09-bd77fe038151"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.647814 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2a6860-a011-4427-bd09-bd77fe038151-kube-api-access-twxd4" (OuterVolumeSpecName: "kube-api-access-twxd4") pod "2a2a6860-a011-4427-bd09-bd77fe038151" (UID: "2a2a6860-a011-4427-bd09-bd77fe038151"). InnerVolumeSpecName "kube-api-access-twxd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.741942 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.742206 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twxd4\" (UniqueName: \"kubernetes.io/projected/2a2a6860-a011-4427-bd09-bd77fe038151-kube-api-access-twxd4\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.749958 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a2a6860-a011-4427-bd09-bd77fe038151" (UID: "2a2a6860-a011-4427-bd09-bd77fe038151"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.768862 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data" (OuterVolumeSpecName: "config-data") pod "2a2a6860-a011-4427-bd09-bd77fe038151" (UID: "2a2a6860-a011-4427-bd09-bd77fe038151"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.844767 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.844798 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2a6860-a011-4427-bd09-bd77fe038151-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.845030 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.845072 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b58dbb476-qzjrl" event={"ID":"2a2a6860-a011-4427-bd09-bd77fe038151","Type":"ContainerDied","Data":"027c39d8078c5d93060356f30b6c6dde87060aa329c178063d9bebe5dedb5f32"}
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.845219 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.845236 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.845438 4678 scope.go:117] "RemoveContainer" containerID="7439e6b86db188333a6f11c73e354ee8e879e98737245baae3f909667fc11936"
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.902582 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b58dbb476-qzjrl"]
Nov 24 11:38:56 crc kubenswrapper[4678]: I1124 11:38:56.919160 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6b58dbb476-qzjrl"]
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.001040 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6694596475-t2mb7"
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.163226 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-combined-ca-bundle\") pod \"e5736f93-57bc-4f43-a09e-7f417d8397b0\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") "
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.163358 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data-custom\") pod \"e5736f93-57bc-4f43-a09e-7f417d8397b0\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") "
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.163391 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data\") pod \"e5736f93-57bc-4f43-a09e-7f417d8397b0\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") "
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.163462 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2jh7\" (UniqueName: \"kubernetes.io/projected/e5736f93-57bc-4f43-a09e-7f417d8397b0-kube-api-access-x2jh7\") pod \"e5736f93-57bc-4f43-a09e-7f417d8397b0\" (UID: \"e5736f93-57bc-4f43-a09e-7f417d8397b0\") "
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.171185 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5736f93-57bc-4f43-a09e-7f417d8397b0-kube-api-access-x2jh7" (OuterVolumeSpecName: "kube-api-access-x2jh7") pod "e5736f93-57bc-4f43-a09e-7f417d8397b0" (UID: "e5736f93-57bc-4f43-a09e-7f417d8397b0"). InnerVolumeSpecName "kube-api-access-x2jh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.172587 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5736f93-57bc-4f43-a09e-7f417d8397b0" (UID: "e5736f93-57bc-4f43-a09e-7f417d8397b0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.217777 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5736f93-57bc-4f43-a09e-7f417d8397b0" (UID: "e5736f93-57bc-4f43-a09e-7f417d8397b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.233785 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data" (OuterVolumeSpecName: "config-data") pod "e5736f93-57bc-4f43-a09e-7f417d8397b0" (UID: "e5736f93-57bc-4f43-a09e-7f417d8397b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.265913 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.265975 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.265985 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5736f93-57bc-4f43-a09e-7f417d8397b0-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.265994 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2jh7\" (UniqueName: \"kubernetes.io/projected/e5736f93-57bc-4f43-a09e-7f417d8397b0-kube-api-access-x2jh7\") on node \"crc\" DevicePath \"\""
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.862689 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6694596475-t2mb7" event={"ID":"e5736f93-57bc-4f43-a09e-7f417d8397b0","Type":"ContainerDied","Data":"13fe472928e9e5f351e8c61450c60ab471c1e2c0f25dc5b02b9d7a75694f8f46"}
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.863021 4678 scope.go:117] "RemoveContainer" containerID="d43107ad88df0a46dc810d4ee3b02fb9d8a99cc87322126c87740a5262264228"
Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.863176 4678 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-api-6694596475-t2mb7" Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.873778 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerStarted","Data":"4689584c661ddc51fdce8ffa8c3d811b1a922b0ee19266acd1813aff713b7bd2"} Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.938432 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" path="/var/lib/kubelet/pods/2a2a6860-a011-4427-bd09-bd77fe038151/volumes" Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.939042 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" event={"ID":"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6","Type":"ContainerStarted","Data":"cd9bc4d5ad09d8e09bf66fb754b1f171c6113ad9c7cf61552f1c9c2d3dfa5132"} Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.939071 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6694596475-t2mb7"] Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.962075 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6694596475-t2mb7"] Nov 24 11:38:57 crc kubenswrapper[4678]: I1124 11:38:57.973321 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" podStartSLOduration=3.231884955 podStartE2EDuration="17.973305239s" podCreationTimestamp="2025-11-24 11:38:40 +0000 UTC" firstStartedPulling="2025-11-24 11:38:41.860491426 +0000 UTC m=+1332.791551065" lastFinishedPulling="2025-11-24 11:38:56.60191171 +0000 UTC m=+1347.532971349" observedRunningTime="2025-11-24 11:38:57.95912948 +0000 UTC m=+1348.890189119" watchObservedRunningTime="2025-11-24 11:38:57.973305239 +0000 UTC m=+1348.904364878" Nov 24 11:38:58 crc kubenswrapper[4678]: I1124 11:38:58.956511 4678 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerStarted","Data":"88a3dc87b13f513dfc4849257ae7723c23e2a60086fa9aefc11ca9710ebf7d88"} Nov 24 11:38:59 crc kubenswrapper[4678]: I1124 11:38:59.907757 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" path="/var/lib/kubelet/pods/e5736f93-57bc-4f43-a09e-7f417d8397b0/volumes" Nov 24 11:38:59 crc kubenswrapper[4678]: I1124 11:38:59.969394 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerStarted","Data":"620e46f2843e86f01ef484051644b5e050b6dea2682464c8a73bed185201998a"} Nov 24 11:38:59 crc kubenswrapper[4678]: I1124 11:38:59.971369 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:38:59 crc kubenswrapper[4678]: I1124 11:38:59.971605 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="sg-core" containerID="cri-o://88a3dc87b13f513dfc4849257ae7723c23e2a60086fa9aefc11ca9710ebf7d88" gracePeriod=30 Nov 24 11:38:59 crc kubenswrapper[4678]: I1124 11:38:59.971643 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="ceilometer-notification-agent" containerID="cri-o://4689584c661ddc51fdce8ffa8c3d811b1a922b0ee19266acd1813aff713b7bd2" gracePeriod=30 Nov 24 11:38:59 crc kubenswrapper[4678]: I1124 11:38:59.971695 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="proxy-httpd" containerID="cri-o://620e46f2843e86f01ef484051644b5e050b6dea2682464c8a73bed185201998a" gracePeriod=30 Nov 24 11:38:59 crc kubenswrapper[4678]: I1124 
11:38:59.972098 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="ceilometer-central-agent" containerID="cri-o://bcd1c86b94a8fd4be4f8dcc65bb4013d50a478c0201a4a9eb405f4b13f535087" gracePeriod=30 Nov 24 11:39:00 crc kubenswrapper[4678]: I1124 11:39:00.001579 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.764492544 podStartE2EDuration="13.001563059s" podCreationTimestamp="2025-11-24 11:38:47 +0000 UTC" firstStartedPulling="2025-11-24 11:38:49.148140794 +0000 UTC m=+1340.079200433" lastFinishedPulling="2025-11-24 11:38:59.385211309 +0000 UTC m=+1350.316270948" observedRunningTime="2025-11-24 11:38:59.999256698 +0000 UTC m=+1350.930316337" watchObservedRunningTime="2025-11-24 11:39:00.001563059 +0000 UTC m=+1350.932622698" Nov 24 11:39:00 crc kubenswrapper[4678]: I1124 11:39:00.131417 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 11:39:00 crc kubenswrapper[4678]: I1124 11:39:00.131804 4678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 11:39:00 crc kubenswrapper[4678]: E1124 11:39:00.353286 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0cecbb6_5a82_4c3e_9bd5_94db58a9f06b.slice/crio-88a3dc87b13f513dfc4849257ae7723c23e2a60086fa9aefc11ca9710ebf7d88.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0cecbb6_5a82_4c3e_9bd5_94db58a9f06b.slice/crio-conmon-88a3dc87b13f513dfc4849257ae7723c23e2a60086fa9aefc11ca9710ebf7d88.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:39:00 crc kubenswrapper[4678]: I1124 11:39:00.465406 4678 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 11:39:00 crc kubenswrapper[4678]: I1124 11:39:00.787221 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:39:00 crc kubenswrapper[4678]: I1124 11:39:00.850121 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:39:00 crc kubenswrapper[4678]: I1124 11:39:00.853050 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b8f4768f4-mzkhn"] Nov 24 11:39:00 crc kubenswrapper[4678]: I1124 11:39:00.963220 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-547bb9ff94-2m2k8"] Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.044567 4678 generic.go:334] "Generic (PLEG): container finished" podID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerID="620e46f2843e86f01ef484051644b5e050b6dea2682464c8a73bed185201998a" exitCode=0 Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.044597 4678 generic.go:334] "Generic (PLEG): container finished" podID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerID="88a3dc87b13f513dfc4849257ae7723c23e2a60086fa9aefc11ca9710ebf7d88" exitCode=2 Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.044604 4678 generic.go:334] "Generic (PLEG): container finished" podID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerID="4689584c661ddc51fdce8ffa8c3d811b1a922b0ee19266acd1813aff713b7bd2" exitCode=0 Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.044611 4678 generic.go:334] "Generic (PLEG): container finished" podID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerID="bcd1c86b94a8fd4be4f8dcc65bb4013d50a478c0201a4a9eb405f4b13f535087" exitCode=0 Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.045983 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerDied","Data":"620e46f2843e86f01ef484051644b5e050b6dea2682464c8a73bed185201998a"} Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.046028 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerDied","Data":"88a3dc87b13f513dfc4849257ae7723c23e2a60086fa9aefc11ca9710ebf7d88"} Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.046039 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerDied","Data":"4689584c661ddc51fdce8ffa8c3d811b1a922b0ee19266acd1813aff713b7bd2"} Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.046047 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerDied","Data":"bcd1c86b94a8fd4be4f8dcc65bb4013d50a478c0201a4a9eb405f4b13f535087"} Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.157503 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.194328 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-run-httpd\") pod \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.194388 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-sg-core-conf-yaml\") pod \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.194412 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-combined-ca-bundle\") pod \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.194526 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5fgl\" (UniqueName: \"kubernetes.io/projected/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-kube-api-access-n5fgl\") pod \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.194577 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-config-data\") pod \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.194723 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-scripts\") pod \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.194811 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-log-httpd\") pod \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\" (UID: \"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.196089 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" (UID: "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.196122 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" (UID: "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.201119 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-scripts" (OuterVolumeSpecName: "scripts") pod "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" (UID: "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.201903 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-kube-api-access-n5fgl" (OuterVolumeSpecName: "kube-api-access-n5fgl") pod "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" (UID: "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b"). InnerVolumeSpecName "kube-api-access-n5fgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.236782 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" (UID: "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.301508 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.301550 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.301562 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.301579 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5fgl\" (UniqueName: \"kubernetes.io/projected/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-kube-api-access-n5fgl\") on 
node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.301590 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.316808 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" (UID: "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.385533 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.388991 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-config-data" (OuterVolumeSpecName: "config-data") pod "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" (UID: "b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.403835 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.403874 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.504025 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.506421 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmk2\" (UniqueName: \"kubernetes.io/projected/da9793e2-686b-4990-bebc-e221b3e14b9d-kube-api-access-6dmk2\") pod \"da9793e2-686b-4990-bebc-e221b3e14b9d\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.506549 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-combined-ca-bundle\") pod \"da9793e2-686b-4990-bebc-e221b3e14b9d\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.506790 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data\") pod \"da9793e2-686b-4990-bebc-e221b3e14b9d\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.506850 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data-custom\") pod \"da9793e2-686b-4990-bebc-e221b3e14b9d\" (UID: \"da9793e2-686b-4990-bebc-e221b3e14b9d\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.511113 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9793e2-686b-4990-bebc-e221b3e14b9d-kube-api-access-6dmk2" (OuterVolumeSpecName: "kube-api-access-6dmk2") pod "da9793e2-686b-4990-bebc-e221b3e14b9d" (UID: "da9793e2-686b-4990-bebc-e221b3e14b9d"). InnerVolumeSpecName "kube-api-access-6dmk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.522532 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "da9793e2-686b-4990-bebc-e221b3e14b9d" (UID: "da9793e2-686b-4990-bebc-e221b3e14b9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.609633 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data-custom\") pod \"1f145b9b-b646-4e65-b709-367fd646614c\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.609701 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6fc4\" (UniqueName: \"kubernetes.io/projected/1f145b9b-b646-4e65-b709-367fd646614c-kube-api-access-x6fc4\") pod \"1f145b9b-b646-4e65-b709-367fd646614c\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.609769 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data\") pod \"1f145b9b-b646-4e65-b709-367fd646614c\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.610011 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-combined-ca-bundle\") pod \"1f145b9b-b646-4e65-b709-367fd646614c\" (UID: \"1f145b9b-b646-4e65-b709-367fd646614c\") " Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.610568 
4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.610587 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dmk2\" (UniqueName: \"kubernetes.io/projected/da9793e2-686b-4990-bebc-e221b3e14b9d-kube-api-access-6dmk2\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.637823 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f145b9b-b646-4e65-b709-367fd646614c" (UID: "1f145b9b-b646-4e65-b709-367fd646614c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.638054 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f145b9b-b646-4e65-b709-367fd646614c-kube-api-access-x6fc4" (OuterVolumeSpecName: "kube-api-access-x6fc4") pod "1f145b9b-b646-4e65-b709-367fd646614c" (UID: "1f145b9b-b646-4e65-b709-367fd646614c"). InnerVolumeSpecName "kube-api-access-x6fc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.640448 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da9793e2-686b-4990-bebc-e221b3e14b9d" (UID: "da9793e2-686b-4990-bebc-e221b3e14b9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.718001 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.723391 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6fc4\" (UniqueName: \"kubernetes.io/projected/1f145b9b-b646-4e65-b709-367fd646614c-kube-api-access-x6fc4\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.723412 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.730335 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data" (OuterVolumeSpecName: "config-data") pod "da9793e2-686b-4990-bebc-e221b3e14b9d" (UID: "da9793e2-686b-4990-bebc-e221b3e14b9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.759890 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f145b9b-b646-4e65-b709-367fd646614c" (UID: "1f145b9b-b646-4e65-b709-367fd646614c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.800893 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data" (OuterVolumeSpecName: "config-data") pod "1f145b9b-b646-4e65-b709-367fd646614c" (UID: "1f145b9b-b646-4e65-b709-367fd646614c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.824996 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.825032 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9793e2-686b-4990-bebc-e221b3e14b9d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:01 crc kubenswrapper[4678]: I1124 11:39:01.825042 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f145b9b-b646-4e65-b709-367fd646614c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.067266 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.067265 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-547bb9ff94-2m2k8" event={"ID":"1f145b9b-b646-4e65-b709-367fd646614c","Type":"ContainerDied","Data":"591ae31281621c4d80dce12b7d6c34e7b526e2c794f9cbdc11014adf412b7b7c"} Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.067419 4678 scope.go:117] "RemoveContainer" containerID="689d5bd851ad3e08cf736d8d8037fb1133c8215234669e7b5448947c9ef8bbd8" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.076829 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b","Type":"ContainerDied","Data":"8427c8e34c7118ad124f89af17ac93c9aebeece65d54a93eca9857c9221ae9b7"} Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.076967 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.084501 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f4768f4-mzkhn" event={"ID":"da9793e2-686b-4990-bebc-e221b3e14b9d","Type":"ContainerDied","Data":"d31ca745ec8c687b29aa6f551d19cea519ec313ab89e69efb044b0d48b664345"} Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.084575 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b8f4768f4-mzkhn" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.097222 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-547bb9ff94-2m2k8"] Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.108816 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-547bb9ff94-2m2k8"] Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.124732 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.125896 4678 scope.go:117] "RemoveContainer" containerID="620e46f2843e86f01ef484051644b5e050b6dea2682464c8a73bed185201998a" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.140038 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.155349 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b8f4768f4-mzkhn"] Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.169040 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-b8f4768f4-mzkhn"] Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.184590 4678 scope.go:117] "RemoveContainer" containerID="88a3dc87b13f513dfc4849257ae7723c23e2a60086fa9aefc11ca9710ebf7d88" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.206846 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.207359 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="proxy-httpd" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207376 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="proxy-httpd" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.207403 4678 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207409 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.207419 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f145b9b-b646-4e65-b709-367fd646614c" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207425 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f145b9b-b646-4e65-b709-367fd646614c" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.207437 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="ceilometer-central-agent" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207443 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="ceilometer-central-agent" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.207452 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207458 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.207471 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207476 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.207493 4678 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="sg-core" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207499 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="sg-core" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.207512 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="ceilometer-notification-agent" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207518 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="ceilometer-notification-agent" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207742 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a2a6860-a011-4427-bd09-bd77fe038151" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207758 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="ceilometer-central-agent" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207767 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="proxy-httpd" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207780 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f145b9b-b646-4e65-b709-367fd646614c" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207796 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f145b9b-b646-4e65-b709-367fd646614c" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207830 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="sg-core" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207839 4678 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e5736f93-57bc-4f43-a09e-7f417d8397b0" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207852 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207864 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.207875 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" containerName="ceilometer-notification-agent" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.208060 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.208067 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d" containerName="heat-api" Nov 24 11:39:02 crc kubenswrapper[4678]: E1124 11:39:02.208100 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f145b9b-b646-4e65-b709-367fd646614c" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.208106 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f145b9b-b646-4e65-b709-367fd646614c" containerName="heat-cfnapi" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.209921 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.216301 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.216362 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.220337 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.253624 4678 scope.go:117] "RemoveContainer" containerID="4689584c661ddc51fdce8ffa8c3d811b1a922b0ee19266acd1813aff713b7bd2" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.337883 4678 scope.go:117] "RemoveContainer" containerID="bcd1c86b94a8fd4be4f8dcc65bb4013d50a478c0201a4a9eb405f4b13f535087" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.339507 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-config-data\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.339547 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.339574 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxfn5\" (UniqueName: \"kubernetes.io/projected/c502b3fc-b151-4993-83f8-cc3fc77e8092-kube-api-access-xxfn5\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " 
pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.339637 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-run-httpd\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.339701 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.339725 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-log-httpd\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.339793 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-scripts\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.409261 4678 scope.go:117] "RemoveContainer" containerID="620def8d7f6b39bb95595b1f08cf19270013b54878bdaa319fbebbcdcda25bae" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.441398 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.441447 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-log-httpd\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.441526 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-scripts\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.441568 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-config-data\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.441591 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.441610 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxfn5\" (UniqueName: \"kubernetes.io/projected/c502b3fc-b151-4993-83f8-cc3fc77e8092-kube-api-access-xxfn5\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.441688 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-run-httpd\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.442210 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-run-httpd\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.445043 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-log-httpd\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.453432 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-scripts\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.458323 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.458990 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-config-data\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.464179 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.513443 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxfn5\" (UniqueName: \"kubernetes.io/projected/c502b3fc-b151-4993-83f8-cc3fc77e8092-kube-api-access-xxfn5\") pod \"ceilometer-0\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " pod="openstack/ceilometer-0" Nov 24 11:39:02 crc kubenswrapper[4678]: I1124 11:39:02.543132 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:03 crc kubenswrapper[4678]: I1124 11:39:03.291100 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:03 crc kubenswrapper[4678]: I1124 11:39:03.924308 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f145b9b-b646-4e65-b709-367fd646614c" path="/var/lib/kubelet/pods/1f145b9b-b646-4e65-b709-367fd646614c/volumes" Nov 24 11:39:03 crc kubenswrapper[4678]: I1124 11:39:03.925408 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b" path="/var/lib/kubelet/pods/b0cecbb6-5a82-4c3e-9bd5-94db58a9f06b/volumes" Nov 24 11:39:03 crc kubenswrapper[4678]: I1124 11:39:03.926380 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da9793e2-686b-4990-bebc-e221b3e14b9d" path="/var/lib/kubelet/pods/da9793e2-686b-4990-bebc-e221b3e14b9d/volumes" Nov 24 11:39:04 crc kubenswrapper[4678]: I1124 11:39:04.184624 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerStarted","Data":"d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8"} Nov 24 11:39:04 
crc kubenswrapper[4678]: I1124 11:39:04.184682 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerStarted","Data":"111bee1efdea13c4cfab43207678e5e88ad65166d749b4287f60d688aadf8ada"} Nov 24 11:39:04 crc kubenswrapper[4678]: I1124 11:39:04.499305 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:39:04 crc kubenswrapper[4678]: I1124 11:39:04.572544 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b6d798f4-7gdft"] Nov 24 11:39:04 crc kubenswrapper[4678]: I1124 11:39:04.574512 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5b6d798f4-7gdft" podUID="59630821-44d7-4a76-873f-45ea27649b05" containerName="heat-engine" containerID="cri-o://fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" gracePeriod=60 Nov 24 11:39:04 crc kubenswrapper[4678]: E1124 11:39:04.900992 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:39:04 crc kubenswrapper[4678]: E1124 11:39:04.902392 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:39:04 crc kubenswrapper[4678]: E1124 11:39:04.908379 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: 
, stderr: , exit code -1" containerID="fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:39:04 crc kubenswrapper[4678]: E1124 11:39:04.908440 4678 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5b6d798f4-7gdft" podUID="59630821-44d7-4a76-873f-45ea27649b05" containerName="heat-engine" Nov 24 11:39:05 crc kubenswrapper[4678]: I1124 11:39:05.196080 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerStarted","Data":"3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462"} Nov 24 11:39:06 crc kubenswrapper[4678]: I1124 11:39:06.210584 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerStarted","Data":"3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce"} Nov 24 11:39:08 crc kubenswrapper[4678]: I1124 11:39:08.251611 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerStarted","Data":"60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b"} Nov 24 11:39:08 crc kubenswrapper[4678]: I1124 11:39:08.252095 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:39:08 crc kubenswrapper[4678]: I1124 11:39:08.277210 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.18673599 podStartE2EDuration="6.277186569s" podCreationTimestamp="2025-11-24 11:39:02 +0000 UTC" firstStartedPulling="2025-11-24 11:39:03.285143617 +0000 UTC m=+1354.216203256" lastFinishedPulling="2025-11-24 
11:39:07.375594196 +0000 UTC m=+1358.306653835" observedRunningTime="2025-11-24 11:39:08.271756994 +0000 UTC m=+1359.202816623" watchObservedRunningTime="2025-11-24 11:39:08.277186569 +0000 UTC m=+1359.208246218" Nov 24 11:39:10 crc kubenswrapper[4678]: I1124 11:39:10.733137 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:10 crc kubenswrapper[4678]: I1124 11:39:10.734852 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="ceilometer-central-agent" containerID="cri-o://d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8" gracePeriod=30 Nov 24 11:39:10 crc kubenswrapper[4678]: I1124 11:39:10.735063 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="proxy-httpd" containerID="cri-o://60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b" gracePeriod=30 Nov 24 11:39:10 crc kubenswrapper[4678]: I1124 11:39:10.735172 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="sg-core" containerID="cri-o://3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce" gracePeriod=30 Nov 24 11:39:10 crc kubenswrapper[4678]: I1124 11:39:10.735280 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="ceilometer-notification-agent" containerID="cri-o://3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462" gracePeriod=30 Nov 24 11:39:11 crc kubenswrapper[4678]: I1124 11:39:11.286056 4678 generic.go:334] "Generic (PLEG): container finished" podID="c502b3fc-b151-4993-83f8-cc3fc77e8092" 
containerID="60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b" exitCode=0 Nov 24 11:39:11 crc kubenswrapper[4678]: I1124 11:39:11.286598 4678 generic.go:334] "Generic (PLEG): container finished" podID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerID="3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce" exitCode=2 Nov 24 11:39:11 crc kubenswrapper[4678]: I1124 11:39:11.286697 4678 generic.go:334] "Generic (PLEG): container finished" podID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerID="3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462" exitCode=0 Nov 24 11:39:11 crc kubenswrapper[4678]: I1124 11:39:11.286198 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerDied","Data":"60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b"} Nov 24 11:39:11 crc kubenswrapper[4678]: I1124 11:39:11.286881 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerDied","Data":"3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce"} Nov 24 11:39:11 crc kubenswrapper[4678]: I1124 11:39:11.286952 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerDied","Data":"3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462"} Nov 24 11:39:13 crc kubenswrapper[4678]: I1124 11:39:13.306906 4678 generic.go:334] "Generic (PLEG): container finished" podID="33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" containerID="cd9bc4d5ad09d8e09bf66fb754b1f171c6113ad9c7cf61552f1c9c2d3dfa5132" exitCode=0 Nov 24 11:39:13 crc kubenswrapper[4678]: I1124 11:39:13.306999 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" 
event={"ID":"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6","Type":"ContainerDied","Data":"cd9bc4d5ad09d8e09bf66fb754b1f171c6113ad9c7cf61552f1c9c2d3dfa5132"} Nov 24 11:39:14 crc kubenswrapper[4678]: I1124 11:39:14.877147 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:39:14 crc kubenswrapper[4678]: E1124 11:39:14.905258 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:39:14 crc kubenswrapper[4678]: E1124 11:39:14.909297 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:39:14 crc kubenswrapper[4678]: E1124 11:39:14.931267 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:39:14 crc kubenswrapper[4678]: E1124 11:39:14.931337 4678 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5b6d798f4-7gdft" podUID="59630821-44d7-4a76-873f-45ea27649b05" containerName="heat-engine" Nov 24 11:39:14 crc kubenswrapper[4678]: I1124 11:39:14.983931 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-config-data\") pod \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " Nov 24 11:39:14 crc kubenswrapper[4678]: I1124 11:39:14.984093 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n2m6\" (UniqueName: \"kubernetes.io/projected/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-kube-api-access-8n2m6\") pod \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " Nov 24 11:39:14 crc kubenswrapper[4678]: I1124 11:39:14.984210 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-scripts\") pod \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " Nov 24 11:39:14 crc kubenswrapper[4678]: I1124 11:39:14.984250 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-combined-ca-bundle\") pod \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\" (UID: \"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6\") " Nov 24 11:39:14 crc kubenswrapper[4678]: I1124 11:39:14.990548 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-scripts" (OuterVolumeSpecName: "scripts") pod "33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" (UID: "33eb1a4f-f16f-474b-bb69-a3d6d87df9f6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.004900 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-kube-api-access-8n2m6" (OuterVolumeSpecName: "kube-api-access-8n2m6") pod "33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" (UID: "33eb1a4f-f16f-474b-bb69-a3d6d87df9f6"). InnerVolumeSpecName "kube-api-access-8n2m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.025749 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-config-data" (OuterVolumeSpecName: "config-data") pod "33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" (UID: "33eb1a4f-f16f-474b-bb69-a3d6d87df9f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.045616 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" (UID: "33eb1a4f-f16f-474b-bb69-a3d6d87df9f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.087307 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n2m6\" (UniqueName: \"kubernetes.io/projected/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-kube-api-access-8n2m6\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.087340 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.087351 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.087359 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.343640 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" event={"ID":"33eb1a4f-f16f-474b-bb69-a3d6d87df9f6","Type":"ContainerDied","Data":"aadb4af1810477e3a602834584c9fa43ca594c2a7a6848dfda70da3b301d0052"} Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.343778 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aadb4af1810477e3a602834584c9fa43ca594c2a7a6848dfda70da3b301d0052" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.343855 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4s8zw" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.445300 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:39:15 crc kubenswrapper[4678]: E1124 11:39:15.445769 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" containerName="nova-cell0-conductor-db-sync" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.445787 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" containerName="nova-cell0-conductor-db-sync" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.451383 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" containerName="nova-cell0-conductor-db-sync" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.493348 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.497958 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.498157 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pl2nf" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.531883 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.599865 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36187042-d7c3-48fd-9bba-ac9967630015-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 
11:39:15.599931 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwfmk\" (UniqueName: \"kubernetes.io/projected/36187042-d7c3-48fd-9bba-ac9967630015-kube-api-access-vwfmk\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.600210 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36187042-d7c3-48fd-9bba-ac9967630015-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.702346 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwfmk\" (UniqueName: \"kubernetes.io/projected/36187042-d7c3-48fd-9bba-ac9967630015-kube-api-access-vwfmk\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.702428 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36187042-d7c3-48fd-9bba-ac9967630015-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.702607 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36187042-d7c3-48fd-9bba-ac9967630015-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.708466 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36187042-d7c3-48fd-9bba-ac9967630015-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.711438 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36187042-d7c3-48fd-9bba-ac9967630015-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.720264 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwfmk\" (UniqueName: \"kubernetes.io/projected/36187042-d7c3-48fd-9bba-ac9967630015-kube-api-access-vwfmk\") pod \"nova-cell0-conductor-0\" (UID: \"36187042-d7c3-48fd-9bba-ac9967630015\") " pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:15 crc kubenswrapper[4678]: I1124 11:39:15.840226 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:16 crc kubenswrapper[4678]: W1124 11:39:16.472281 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36187042_d7c3_48fd_9bba_ac9967630015.slice/crio-ffd501daf3ce20c6ec231c91bd6cda9f2d07c329162b32eb272283bb0a7fb2c9 WatchSource:0}: Error finding container ffd501daf3ce20c6ec231c91bd6cda9f2d07c329162b32eb272283bb0a7fb2c9: Status 404 returned error can't find the container with id ffd501daf3ce20c6ec231c91bd6cda9f2d07c329162b32eb272283bb0a7fb2c9 Nov 24 11:39:16 crc kubenswrapper[4678]: I1124 11:39:16.473003 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.379594 4678 generic.go:334] "Generic (PLEG): container finished" podID="59630821-44d7-4a76-873f-45ea27649b05" containerID="fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" exitCode=0 Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.379686 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b6d798f4-7gdft" event={"ID":"59630821-44d7-4a76-873f-45ea27649b05","Type":"ContainerDied","Data":"fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7"} Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.381598 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"36187042-d7c3-48fd-9bba-ac9967630015","Type":"ContainerStarted","Data":"77f75798f3790196d444111bc62ffd9833441a05b20671248bc11c88b81544d4"} Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.381649 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"36187042-d7c3-48fd-9bba-ac9967630015","Type":"ContainerStarted","Data":"ffd501daf3ce20c6ec231c91bd6cda9f2d07c329162b32eb272283bb0a7fb2c9"} Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 
11:39:17.383110 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.406438 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.406419112 podStartE2EDuration="2.406419112s" podCreationTimestamp="2025-11-24 11:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:17.402573588 +0000 UTC m=+1368.333633227" watchObservedRunningTime="2025-11-24 11:39:17.406419112 +0000 UTC m=+1368.337478751" Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.858734 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.968721 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-combined-ca-bundle\") pod \"59630821-44d7-4a76-873f-45ea27649b05\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.968796 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhb5z\" (UniqueName: \"kubernetes.io/projected/59630821-44d7-4a76-873f-45ea27649b05-kube-api-access-hhb5z\") pod \"59630821-44d7-4a76-873f-45ea27649b05\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.968890 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data\") pod \"59630821-44d7-4a76-873f-45ea27649b05\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 
11:39:17.969028 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data-custom\") pod \"59630821-44d7-4a76-873f-45ea27649b05\" (UID: \"59630821-44d7-4a76-873f-45ea27649b05\") " Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.979711 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "59630821-44d7-4a76-873f-45ea27649b05" (UID: "59630821-44d7-4a76-873f-45ea27649b05"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:17 crc kubenswrapper[4678]: I1124 11:39:17.987916 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59630821-44d7-4a76-873f-45ea27649b05-kube-api-access-hhb5z" (OuterVolumeSpecName: "kube-api-access-hhb5z") pod "59630821-44d7-4a76-873f-45ea27649b05" (UID: "59630821-44d7-4a76-873f-45ea27649b05"). InnerVolumeSpecName "kube-api-access-hhb5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.034844 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59630821-44d7-4a76-873f-45ea27649b05" (UID: "59630821-44d7-4a76-873f-45ea27649b05"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.055867 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data" (OuterVolumeSpecName: "config-data") pod "59630821-44d7-4a76-873f-45ea27649b05" (UID: "59630821-44d7-4a76-873f-45ea27649b05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.071940 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.071982 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.071996 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhb5z\" (UniqueName: \"kubernetes.io/projected/59630821-44d7-4a76-873f-45ea27649b05-kube-api-access-hhb5z\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.072010 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59630821-44d7-4a76-873f-45ea27649b05-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.395141 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b6d798f4-7gdft" event={"ID":"59630821-44d7-4a76-873f-45ea27649b05","Type":"ContainerDied","Data":"8c2a9bcb9e1947ed6b5b146290711b63149a58f7553b5ebfdadcf3b2e4de78c1"} Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.395577 4678 scope.go:117] "RemoveContainer" 
containerID="fdaf4f069c8fb42c056351f3d37198802bbf6d8d0637b43b3bc2e12908ed58a7" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.395179 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b6d798f4-7gdft" Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.442545 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b6d798f4-7gdft"] Nov 24 11:39:18 crc kubenswrapper[4678]: I1124 11:39:18.457194 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5b6d798f4-7gdft"] Nov 24 11:39:19 crc kubenswrapper[4678]: I1124 11:39:19.930695 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59630821-44d7-4a76-873f-45ea27649b05" path="/var/lib/kubelet/pods/59630821-44d7-4a76-873f-45ea27649b05/volumes" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.149700 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.229646 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-scripts\") pod \"c502b3fc-b151-4993-83f8-cc3fc77e8092\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.229753 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-combined-ca-bundle\") pod \"c502b3fc-b151-4993-83f8-cc3fc77e8092\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.229932 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-log-httpd\") pod 
\"c502b3fc-b151-4993-83f8-cc3fc77e8092\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.229968 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfn5\" (UniqueName: \"kubernetes.io/projected/c502b3fc-b151-4993-83f8-cc3fc77e8092-kube-api-access-xxfn5\") pod \"c502b3fc-b151-4993-83f8-cc3fc77e8092\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.230011 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-config-data\") pod \"c502b3fc-b151-4993-83f8-cc3fc77e8092\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.230099 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-sg-core-conf-yaml\") pod \"c502b3fc-b151-4993-83f8-cc3fc77e8092\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.230149 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-run-httpd\") pod \"c502b3fc-b151-4993-83f8-cc3fc77e8092\" (UID: \"c502b3fc-b151-4993-83f8-cc3fc77e8092\") " Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.231132 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c502b3fc-b151-4993-83f8-cc3fc77e8092" (UID: "c502b3fc-b151-4993-83f8-cc3fc77e8092"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.231179 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c502b3fc-b151-4993-83f8-cc3fc77e8092" (UID: "c502b3fc-b151-4993-83f8-cc3fc77e8092"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.256956 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c502b3fc-b151-4993-83f8-cc3fc77e8092-kube-api-access-xxfn5" (OuterVolumeSpecName: "kube-api-access-xxfn5") pod "c502b3fc-b151-4993-83f8-cc3fc77e8092" (UID: "c502b3fc-b151-4993-83f8-cc3fc77e8092"). InnerVolumeSpecName "kube-api-access-xxfn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.270824 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-scripts" (OuterVolumeSpecName: "scripts") pod "c502b3fc-b151-4993-83f8-cc3fc77e8092" (UID: "c502b3fc-b151-4993-83f8-cc3fc77e8092"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.286996 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c502b3fc-b151-4993-83f8-cc3fc77e8092" (UID: "c502b3fc-b151-4993-83f8-cc3fc77e8092"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.334135 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.334205 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.334224 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxfn5\" (UniqueName: \"kubernetes.io/projected/c502b3fc-b151-4993-83f8-cc3fc77e8092-kube-api-access-xxfn5\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.334243 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.334262 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c502b3fc-b151-4993-83f8-cc3fc77e8092-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.395828 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c502b3fc-b151-4993-83f8-cc3fc77e8092" (UID: "c502b3fc-b151-4993-83f8-cc3fc77e8092"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.423690 4678 generic.go:334] "Generic (PLEG): container finished" podID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerID="d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8" exitCode=0 Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.423776 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerDied","Data":"d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8"} Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.423800 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.423833 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c502b3fc-b151-4993-83f8-cc3fc77e8092","Type":"ContainerDied","Data":"111bee1efdea13c4cfab43207678e5e88ad65166d749b4287f60d688aadf8ada"} Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.423860 4678 scope.go:117] "RemoveContainer" containerID="60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.427489 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-config-data" (OuterVolumeSpecName: "config-data") pod "c502b3fc-b151-4993-83f8-cc3fc77e8092" (UID: "c502b3fc-b151-4993-83f8-cc3fc77e8092"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.436413 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.436449 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c502b3fc-b151-4993-83f8-cc3fc77e8092-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.449693 4678 scope.go:117] "RemoveContainer" containerID="3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.472394 4678 scope.go:117] "RemoveContainer" containerID="3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.502362 4678 scope.go:117] "RemoveContainer" containerID="d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.532629 4678 scope.go:117] "RemoveContainer" containerID="60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b" Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.533361 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b\": container with ID starting with 60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b not found: ID does not exist" containerID="60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.533406 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b"} 
err="failed to get container status \"60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b\": rpc error: code = NotFound desc = could not find container \"60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b\": container with ID starting with 60cdc49469dba81bd2cf21b5bd0004ab327cf7bad4afdcf8a683ffde9d454c1b not found: ID does not exist" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.533438 4678 scope.go:117] "RemoveContainer" containerID="3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce" Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.533976 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce\": container with ID starting with 3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce not found: ID does not exist" containerID="3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.534029 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce"} err="failed to get container status \"3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce\": rpc error: code = NotFound desc = could not find container \"3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce\": container with ID starting with 3e97a1f4c474decab33beb9380171a0823512f6c4487b3187bb9e252fe9113ce not found: ID does not exist" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.534064 4678 scope.go:117] "RemoveContainer" containerID="3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462" Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.534499 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462\": container with ID starting with 3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462 not found: ID does not exist" containerID="3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.534537 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462"} err="failed to get container status \"3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462\": rpc error: code = NotFound desc = could not find container \"3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462\": container with ID starting with 3b34042dc7ea60b304d27e8663bfe6a82abe9c375cd20e4b2ff0a457227d5462 not found: ID does not exist" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.534558 4678 scope.go:117] "RemoveContainer" containerID="d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8" Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.534962 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8\": container with ID starting with d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8 not found: ID does not exist" containerID="d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.534992 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8"} err="failed to get container status \"d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8\": rpc error: code = NotFound desc = could not find container \"d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8\": container with ID 
starting with d88f3203c15797ba1d805a6fa243c3d4b18599db3adbe1a1fd2f04a950abe8b8 not found: ID does not exist" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.769102 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.790854 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.853896 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.854859 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="ceilometer-central-agent" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.854879 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="ceilometer-central-agent" Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.854916 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="ceilometer-notification-agent" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.854925 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="ceilometer-notification-agent" Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.854958 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59630821-44d7-4a76-873f-45ea27649b05" containerName="heat-engine" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.854965 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="59630821-44d7-4a76-873f-45ea27649b05" containerName="heat-engine" Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.855000 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="sg-core" Nov 24 11:39:20 crc 
kubenswrapper[4678]: I1124 11:39:20.855007 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="sg-core" Nov 24 11:39:20 crc kubenswrapper[4678]: E1124 11:39:20.855015 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="proxy-httpd" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.855024 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="proxy-httpd" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.855940 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="ceilometer-central-agent" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.855966 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="proxy-httpd" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.856015 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="sg-core" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.856036 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" containerName="ceilometer-notification-agent" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.856044 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="59630821-44d7-4a76-873f-45ea27649b05" containerName="heat-engine" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.858948 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.862430 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.862975 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.865562 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.959441 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-run-httpd\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.959552 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-log-httpd\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.959756 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsm48\" (UniqueName: \"kubernetes.io/projected/bd8e9f00-47a2-4006-b096-0b7c23b03c38-kube-api-access-gsm48\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.959973 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.960105 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-config-data\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.960156 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:20 crc kubenswrapper[4678]: I1124 11:39:20.960272 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-scripts\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.062954 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsm48\" (UniqueName: \"kubernetes.io/projected/bd8e9f00-47a2-4006-b096-0b7c23b03c38-kube-api-access-gsm48\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.063133 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.063209 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-config-data\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.063250 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.063336 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-scripts\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.063413 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-run-httpd\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.063446 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-log-httpd\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.065167 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-log-httpd\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " 
pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.065455 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-run-httpd\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.071726 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.094687 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.096203 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-scripts\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.109317 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsm48\" (UniqueName: \"kubernetes.io/projected/bd8e9f00-47a2-4006-b096-0b7c23b03c38-kube-api-access-gsm48\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.119368 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-config-data\") pod \"ceilometer-0\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.189867 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.755501 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:21 crc kubenswrapper[4678]: W1124 11:39:21.762778 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd8e9f00_47a2_4006_b096_0b7c23b03c38.slice/crio-711ab7853f5e51bf26ee6fab1cd0d344ee0123bf131d854e10798bed7ad89bac WatchSource:0}: Error finding container 711ab7853f5e51bf26ee6fab1cd0d344ee0123bf131d854e10798bed7ad89bac: Status 404 returned error can't find the container with id 711ab7853f5e51bf26ee6fab1cd0d344ee0123bf131d854e10798bed7ad89bac Nov 24 11:39:21 crc kubenswrapper[4678]: I1124 11:39:21.914001 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c502b3fc-b151-4993-83f8-cc3fc77e8092" path="/var/lib/kubelet/pods/c502b3fc-b151-4993-83f8-cc3fc77e8092/volumes" Nov 24 11:39:22 crc kubenswrapper[4678]: I1124 11:39:22.460258 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerStarted","Data":"19124e20dfebdcd3eef63e203f88f495af0a4e394e869a8922af73d3ed7afeb6"} Nov 24 11:39:22 crc kubenswrapper[4678]: I1124 11:39:22.460624 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerStarted","Data":"711ab7853f5e51bf26ee6fab1cd0d344ee0123bf131d854e10798bed7ad89bac"} Nov 24 11:39:23 crc kubenswrapper[4678]: I1124 11:39:23.474347 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerStarted","Data":"0726975dfe072c63b563a3cb1d5d33fb13c610236e60235f131c573e336d4ed4"} Nov 24 11:39:24 crc kubenswrapper[4678]: I1124 11:39:24.510116 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerStarted","Data":"d70ea5270fe6cf8b1e5c8fcb2fab7ccf8638b0b8d700c07fd9c4690f91d3d482"} Nov 24 11:39:25 crc kubenswrapper[4678]: I1124 11:39:25.412370 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:25 crc kubenswrapper[4678]: I1124 11:39:25.525154 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerStarted","Data":"98afb4097603da7fbc52f15c314b815e59fe4361df6cf2500f3c65c89dd7104b"} Nov 24 11:39:25 crc kubenswrapper[4678]: I1124 11:39:25.525983 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:39:25 crc kubenswrapper[4678]: I1124 11:39:25.552532 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.309726839 podStartE2EDuration="5.55249936s" podCreationTimestamp="2025-11-24 11:39:20 +0000 UTC" firstStartedPulling="2025-11-24 11:39:21.767348585 +0000 UTC m=+1372.698408234" lastFinishedPulling="2025-11-24 11:39:25.010121116 +0000 UTC m=+1375.941180755" observedRunningTime="2025-11-24 11:39:25.546661974 +0000 UTC m=+1376.477721613" watchObservedRunningTime="2025-11-24 11:39:25.55249936 +0000 UTC m=+1376.483558999" Nov 24 11:39:25 crc kubenswrapper[4678]: I1124 11:39:25.873902 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.568357 4678 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-frnfg"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.581579 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.588126 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="ceilometer-central-agent" containerID="cri-o://19124e20dfebdcd3eef63e203f88f495af0a4e394e869a8922af73d3ed7afeb6" gracePeriod=30 Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.588381 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="proxy-httpd" containerID="cri-o://98afb4097603da7fbc52f15c314b815e59fe4361df6cf2500f3c65c89dd7104b" gracePeriod=30 Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.588449 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="sg-core" containerID="cri-o://d70ea5270fe6cf8b1e5c8fcb2fab7ccf8638b0b8d700c07fd9c4690f91d3d482" gracePeriod=30 Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.588494 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="ceilometer-notification-agent" containerID="cri-o://0726975dfe072c63b563a3cb1d5d33fb13c610236e60235f131c573e336d4ed4" gracePeriod=30 Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.605983 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.606188 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 11:39:26 crc 
kubenswrapper[4678]: I1124 11:39:26.616800 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-frnfg"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.713152 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-scripts\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.713223 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-config-data\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.713298 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.713443 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6qjg\" (UniqueName: \"kubernetes.io/projected/7c07a289-92fa-4945-a0d6-fa2524b0492f-kube-api-access-h6qjg\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.724644 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.726268 4678 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.729977 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.801808 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.812223 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.813584 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.815072 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-scripts\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.815112 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-config-data\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.815157 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.815198 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.815257 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqr69\" (UniqueName: \"kubernetes.io/projected/f881b8b7-c793-4eea-8a59-5095344fce59-kube-api-access-nqr69\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.815279 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6qjg\" (UniqueName: \"kubernetes.io/projected/7c07a289-92fa-4945-a0d6-fa2524b0492f-kube-api-access-h6qjg\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.815326 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-config-data\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.817628 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.836512 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-scripts\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: 
\"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.844483 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6qjg\" (UniqueName: \"kubernetes.io/projected/7c07a289-92fa-4945-a0d6-fa2524b0492f-kube-api-access-h6qjg\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.851568 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.852352 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.858321 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-config-data\") pod \"nova-cell0-cell-mapping-frnfg\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.864277 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.874010 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.876426 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.892192 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.908085 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.910040 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.915561 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.917268 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.917327 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqr69\" (UniqueName: \"kubernetes.io/projected/f881b8b7-c793-4eea-8a59-5095344fce59-kube-api-access-nqr69\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.917394 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-config-data\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc 
kubenswrapper[4678]: I1124 11:39:26.917502 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp45r\" (UniqueName: \"kubernetes.io/projected/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-kube-api-access-bp45r\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.917530 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.917551 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.937316 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.940229 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-config-data\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:26 crc kubenswrapper[4678]: I1124 11:39:26.941221 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.019164 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.021392 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqr69\" (UniqueName: \"kubernetes.io/projected/f881b8b7-c793-4eea-8a59-5095344fce59-kube-api-access-nqr69\") pod \"nova-scheduler-0\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.022986 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-config-data\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.023035 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.023599 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-config-data\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.023640 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-combined-ca-bundle\") pod \"nova-metadata-0\" 
(UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.023710 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.023804 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73198913-cbf2-4796-bdc0-562acaedacaa-logs\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.023864 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfhr\" (UniqueName: \"kubernetes.io/projected/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-kube-api-access-ndfhr\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.023903 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-logs\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.023954 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp45r\" (UniqueName: \"kubernetes.io/projected/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-kube-api-access-bp45r\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 
11:39:27.023984 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.024089 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jkmz\" (UniqueName: \"kubernetes.io/projected/73198913-cbf2-4796-bdc0-562acaedacaa-kube-api-access-4jkmz\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.036086 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.037582 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.073728 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.082295 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp45r\" (UniqueName: \"kubernetes.io/projected/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-kube-api-access-bp45r\") pod \"nova-cell1-novncproxy-0\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.110433 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.131984 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-tltnp"] Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.134078 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.149978 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73198913-cbf2-4796-bdc0-562acaedacaa-logs\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.150043 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfhr\" (UniqueName: \"kubernetes.io/projected/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-kube-api-access-ndfhr\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.150081 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-logs\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " 
pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.150172 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jkmz\" (UniqueName: \"kubernetes.io/projected/73198913-cbf2-4796-bdc0-562acaedacaa-kube-api-access-4jkmz\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.150198 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-config-data\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.150237 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-config-data\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.150256 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.150295 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.151369 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-logs\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.151598 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73198913-cbf2-4796-bdc0-562acaedacaa-logs\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.175374 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-config-data\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.178092 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndfhr\" (UniqueName: \"kubernetes.io/projected/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-kube-api-access-ndfhr\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.200848 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.209946 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-tltnp"] Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.220645 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jkmz\" (UniqueName: \"kubernetes.io/projected/73198913-cbf2-4796-bdc0-562acaedacaa-kube-api-access-4jkmz\") pod 
\"nova-metadata-0\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.220743 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-config-data\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.221644 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.252683 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.271870 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.271960 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.272000 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-config\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.272042 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.272091 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85zn\" (UniqueName: \"kubernetes.io/projected/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-kube-api-access-g85zn\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.272166 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.376814 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.377144 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.384746 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.385829 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.386773 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.386869 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-config\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.386952 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.387010 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85zn\" (UniqueName: \"kubernetes.io/projected/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-kube-api-access-g85zn\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.387438 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.388086 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.388402 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-config\") pod \"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.425227 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85zn\" (UniqueName: \"kubernetes.io/projected/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-kube-api-access-g85zn\") pod 
\"dnsmasq-dns-5fbc4d444f-tltnp\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.493872 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.574443 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.662455 4678 generic.go:334] "Generic (PLEG): container finished" podID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerID="d70ea5270fe6cf8b1e5c8fcb2fab7ccf8638b0b8d700c07fd9c4690f91d3d482" exitCode=2 Nov 24 11:39:27 crc kubenswrapper[4678]: I1124 11:39:27.662504 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerDied","Data":"d70ea5270fe6cf8b1e5c8fcb2fab7ccf8638b0b8d700c07fd9c4690f91d3d482"} Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.164982 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.224285 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-frnfg"] Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.233011 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:28 crc kubenswrapper[4678]: W1124 11:39:28.247907 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c07a289_92fa_4945_a0d6_fa2524b0492f.slice/crio-bd3fdc4010ba8cdd8a1c8c90449c5103cff4906a677e289959bfffbea588cc21 WatchSource:0}: Error finding container bd3fdc4010ba8cdd8a1c8c90449c5103cff4906a677e289959bfffbea588cc21: Status 404 returned error can't find the container with id 
bd3fdc4010ba8cdd8a1c8c90449c5103cff4906a677e289959bfffbea588cc21 Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.606570 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:28 crc kubenswrapper[4678]: W1124 11:39:28.617641 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73198913_cbf2_4796_bdc0_562acaedacaa.slice/crio-7cedcd30c745739eef42fb7b4b9e5e5f85a09669f9d6dd1ac6985ac19f6acdcb WatchSource:0}: Error finding container 7cedcd30c745739eef42fb7b4b9e5e5f85a09669f9d6dd1ac6985ac19f6acdcb: Status 404 returned error can't find the container with id 7cedcd30c745739eef42fb7b4b9e5e5f85a09669f9d6dd1ac6985ac19f6acdcb Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.681741 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.709333 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73198913-cbf2-4796-bdc0-562acaedacaa","Type":"ContainerStarted","Data":"7cedcd30c745739eef42fb7b4b9e5e5f85a09669f9d6dd1ac6985ac19f6acdcb"} Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.714930 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f881b8b7-c793-4eea-8a59-5095344fce59","Type":"ContainerStarted","Data":"4e3ea862ce759879d499abafe94e45d12339ef44d2bfb88163838f2a0176aebf"} Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.736307 4678 generic.go:334] "Generic (PLEG): container finished" podID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerID="98afb4097603da7fbc52f15c314b815e59fe4361df6cf2500f3c65c89dd7104b" exitCode=0 Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.736344 4678 generic.go:334] "Generic (PLEG): container finished" podID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" 
containerID="0726975dfe072c63b563a3cb1d5d33fb13c610236e60235f131c573e336d4ed4" exitCode=0 Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.736404 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerDied","Data":"98afb4097603da7fbc52f15c314b815e59fe4361df6cf2500f3c65c89dd7104b"} Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.736434 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerDied","Data":"0726975dfe072c63b563a3cb1d5d33fb13c610236e60235f131c573e336d4ed4"} Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.742280 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2ce4708c-9dae-4c99-95c9-9ea6c62304c1","Type":"ContainerStarted","Data":"f5341d1ed1c20d47b9524824f6185723f8d99350bbcf1bbd7d08265f8c7ab87f"} Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.753024 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-frnfg" event={"ID":"7c07a289-92fa-4945-a0d6-fa2524b0492f","Type":"ContainerStarted","Data":"b841e9b8aba88f1d070e9f7584f8b758b33955b65305660616f6b2d6ccffc3f1"} Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.753126 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-frnfg" event={"ID":"7c07a289-92fa-4945-a0d6-fa2524b0492f","Type":"ContainerStarted","Data":"bd3fdc4010ba8cdd8a1c8c90449c5103cff4906a677e289959bfffbea588cc21"} Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.778415 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-tltnp"] Nov 24 11:39:28 crc kubenswrapper[4678]: I1124 11:39:28.809106 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-frnfg" 
podStartSLOduration=2.80907224 podStartE2EDuration="2.80907224s" podCreationTimestamp="2025-11-24 11:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:28.808782943 +0000 UTC m=+1379.739842582" watchObservedRunningTime="2025-11-24 11:39:28.80907224 +0000 UTC m=+1379.740131879" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.586061 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mptk6"] Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.588080 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.593346 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.593768 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.596590 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mptk6"] Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.674301 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-config-data\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.674616 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mptk6\" 
(UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.674683 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxtt\" (UniqueName: \"kubernetes.io/projected/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-kube-api-access-vjxtt\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.674924 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-scripts\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.772935 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e7def9d-7b7b-4ce9-a8eb-e5736e671100","Type":"ContainerStarted","Data":"007eb9fb66fdc02f0dad418f6d937280658ea69f3d43ffc44fb6c527dd4fd427"} Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.777155 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-scripts\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.777310 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-config-data\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " 
pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.778187 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.778238 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxtt\" (UniqueName: \"kubernetes.io/projected/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-kube-api-access-vjxtt\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.790974 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-scripts\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.807373 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-config-data\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.808019 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " 
pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.810902 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxtt\" (UniqueName: \"kubernetes.io/projected/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-kube-api-access-vjxtt\") pod \"nova-cell1-conductor-db-sync-mptk6\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.843960 4678 generic.go:334] "Generic (PLEG): container finished" podID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" containerID="89e1fc15624953c6271982051e9e822b38de30c6c963a2abc003b626e9f02ef7" exitCode=0 Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.845704 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" event={"ID":"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3","Type":"ContainerDied","Data":"89e1fc15624953c6271982051e9e822b38de30c6c963a2abc003b626e9f02ef7"} Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.845739 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" event={"ID":"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3","Type":"ContainerStarted","Data":"ee6d1d1540d7b2ccce21763a093b9a59b05293a0e974ad580e4872a07dab9b5c"} Nov 24 11:39:29 crc kubenswrapper[4678]: I1124 11:39:29.945163 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:30 crc kubenswrapper[4678]: I1124 11:39:30.582726 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mptk6"] Nov 24 11:39:30 crc kubenswrapper[4678]: I1124 11:39:30.664652 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:30 crc kubenswrapper[4678]: I1124 11:39:30.680204 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:39:30 crc kubenswrapper[4678]: I1124 11:39:30.862231 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" event={"ID":"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3","Type":"ContainerStarted","Data":"8132111ec530ab0cfc3c88d6845ed1b264ade70a007a85f7dfc5c84046f38aad"} Nov 24 11:39:30 crc kubenswrapper[4678]: I1124 11:39:30.863410 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:30 crc kubenswrapper[4678]: I1124 11:39:30.901499 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" podStartSLOduration=4.901481256 podStartE2EDuration="4.901481256s" podCreationTimestamp="2025-11-24 11:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:30.890937475 +0000 UTC m=+1381.821997114" watchObservedRunningTime="2025-11-24 11:39:30.901481256 +0000 UTC m=+1381.832540895" Nov 24 11:39:31 crc kubenswrapper[4678]: I1124 11:39:31.881851 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mptk6" event={"ID":"0a390c1f-e5b4-47a0-a9e8-a9979475fbab","Type":"ContainerStarted","Data":"56f09f217d3afa7199039697e00027c2f08cb1f214ae4d1c3a76fe3f7f0c1daa"} Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 
11:39:33.930394 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f881b8b7-c793-4eea-8a59-5095344fce59","Type":"ContainerStarted","Data":"9006cdc8b802f5a4126dd2f5eb2f548e98d008124214e5f8647700e6c8cdda53"} Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.950019 4678 generic.go:334] "Generic (PLEG): container finished" podID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerID="19124e20dfebdcd3eef63e203f88f495af0a4e394e869a8922af73d3ed7afeb6" exitCode=0 Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.950104 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerDied","Data":"19124e20dfebdcd3eef63e203f88f495af0a4e394e869a8922af73d3ed7afeb6"} Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.953740 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2ce4708c-9dae-4c99-95c9-9ea6c62304c1","Type":"ContainerStarted","Data":"0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c"} Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.954779 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2ce4708c-9dae-4c99-95c9-9ea6c62304c1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c" gracePeriod=30 Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.955372 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.467324244 podStartE2EDuration="7.955354546s" podCreationTimestamp="2025-11-24 11:39:26 +0000 UTC" firstStartedPulling="2025-11-24 11:39:28.253940835 +0000 UTC m=+1379.185000474" lastFinishedPulling="2025-11-24 11:39:32.741971137 +0000 UTC m=+1383.673030776" observedRunningTime="2025-11-24 11:39:33.952459738 +0000 
UTC m=+1384.883519377" watchObservedRunningTime="2025-11-24 11:39:33.955354546 +0000 UTC m=+1384.886414185" Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.982975 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73198913-cbf2-4796-bdc0-562acaedacaa","Type":"ContainerStarted","Data":"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09"} Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.983024 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73198913-cbf2-4796-bdc0-562acaedacaa","Type":"ContainerStarted","Data":"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e"} Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.983166 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="73198913-cbf2-4796-bdc0-562acaedacaa" containerName="nova-metadata-log" containerID="cri-o://69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e" gracePeriod=30 Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.983763 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="73198913-cbf2-4796-bdc0-562acaedacaa" containerName="nova-metadata-metadata" containerID="cri-o://ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09" gracePeriod=30 Nov 24 11:39:33 crc kubenswrapper[4678]: I1124 11:39:33.990195 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.419590397 podStartE2EDuration="7.990168227s" podCreationTimestamp="2025-11-24 11:39:26 +0000 UTC" firstStartedPulling="2025-11-24 11:39:28.136261597 +0000 UTC m=+1379.067321236" lastFinishedPulling="2025-11-24 11:39:32.706839427 +0000 UTC m=+1383.637899066" observedRunningTime="2025-11-24 11:39:33.975148256 +0000 UTC m=+1384.906207895" watchObservedRunningTime="2025-11-24 
11:39:33.990168227 +0000 UTC m=+1384.921227866" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:33.997878 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e7def9d-7b7b-4ce9-a8eb-e5736e671100","Type":"ContainerStarted","Data":"4b23073c074a48f6e9e900563d1d17dc3c8f96e95a42fa5359bf4b1e63da140e"} Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:33.997921 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e7def9d-7b7b-4ce9-a8eb-e5736e671100","Type":"ContainerStarted","Data":"0b27460ca5f4c19bec1f79417c04b4806b15761578c31189d2483df4b4810c52"} Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.005831 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mptk6" event={"ID":"0a390c1f-e5b4-47a0-a9e8-a9979475fbab","Type":"ContainerStarted","Data":"b71e95ad1773190481a5e9b395e2aa833103646e5c626528943357a7946244db"} Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.029905 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.884341735 podStartE2EDuration="8.029885179s" podCreationTimestamp="2025-11-24 11:39:26 +0000 UTC" firstStartedPulling="2025-11-24 11:39:28.62324513 +0000 UTC m=+1379.554304769" lastFinishedPulling="2025-11-24 11:39:32.768788574 +0000 UTC m=+1383.699848213" observedRunningTime="2025-11-24 11:39:34.0179509 +0000 UTC m=+1384.949010539" watchObservedRunningTime="2025-11-24 11:39:34.029885179 +0000 UTC m=+1384.960944818" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.052946 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mptk6" podStartSLOduration=5.052923785 podStartE2EDuration="5.052923785s" podCreationTimestamp="2025-11-24 11:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 11:39:34.043080662 +0000 UTC m=+1384.974140301" watchObservedRunningTime="2025-11-24 11:39:34.052923785 +0000 UTC m=+1384.983983424" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.082755 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.995664153 podStartE2EDuration="8.082736123s" podCreationTimestamp="2025-11-24 11:39:26 +0000 UTC" firstStartedPulling="2025-11-24 11:39:28.712287092 +0000 UTC m=+1379.643346731" lastFinishedPulling="2025-11-24 11:39:32.799359062 +0000 UTC m=+1383.730418701" observedRunningTime="2025-11-24 11:39:34.063139349 +0000 UTC m=+1384.994198998" watchObservedRunningTime="2025-11-24 11:39:34.082736123 +0000 UTC m=+1385.013795762" Nov 24 11:39:34 crc kubenswrapper[4678]: E1124 11:39:34.263505 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd8e9f00_47a2_4006_b096_0b7c23b03c38.slice/crio-19124e20dfebdcd3eef63e203f88f495af0a4e394e869a8922af73d3ed7afeb6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd8e9f00_47a2_4006_b096_0b7c23b03c38.slice/crio-conmon-19124e20dfebdcd3eef63e203f88f495af0a4e394e869a8922af73d3ed7afeb6.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:39:34 crc kubenswrapper[4678]: E1124 11:39:34.264203 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73198913_cbf2_4796_bdc0_562acaedacaa.slice/crio-ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.359239 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.542972 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-run-httpd\") pod \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.543579 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-scripts\") pod \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.544008 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsm48\" (UniqueName: \"kubernetes.io/projected/bd8e9f00-47a2-4006-b096-0b7c23b03c38-kube-api-access-gsm48\") pod \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.544062 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-log-httpd\") pod \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.544153 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-config-data\") pod \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.544236 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-sg-core-conf-yaml\") pod \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.544489 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-combined-ca-bundle\") pod \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\" (UID: \"bd8e9f00-47a2-4006-b096-0b7c23b03c38\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.546010 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd8e9f00-47a2-4006-b096-0b7c23b03c38" (UID: "bd8e9f00-47a2-4006-b096-0b7c23b03c38"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.546322 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd8e9f00-47a2-4006-b096-0b7c23b03c38" (UID: "bd8e9f00-47a2-4006-b096-0b7c23b03c38"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.553372 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-scripts" (OuterVolumeSpecName: "scripts") pod "bd8e9f00-47a2-4006-b096-0b7c23b03c38" (UID: "bd8e9f00-47a2-4006-b096-0b7c23b03c38"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.554110 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8e9f00-47a2-4006-b096-0b7c23b03c38-kube-api-access-gsm48" (OuterVolumeSpecName: "kube-api-access-gsm48") pod "bd8e9f00-47a2-4006-b096-0b7c23b03c38" (UID: "bd8e9f00-47a2-4006-b096-0b7c23b03c38"). InnerVolumeSpecName "kube-api-access-gsm48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.600624 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd8e9f00-47a2-4006-b096-0b7c23b03c38" (UID: "bd8e9f00-47a2-4006-b096-0b7c23b03c38"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.635001 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.651662 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.651706 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.651715 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsm48\" (UniqueName: \"kubernetes.io/projected/bd8e9f00-47a2-4006-b096-0b7c23b03c38-kube-api-access-gsm48\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.651726 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9f00-47a2-4006-b096-0b7c23b03c38-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.651739 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.659593 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd8e9f00-47a2-4006-b096-0b7c23b03c38" (UID: "bd8e9f00-47a2-4006-b096-0b7c23b03c38"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.737784 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-config-data" (OuterVolumeSpecName: "config-data") pod "bd8e9f00-47a2-4006-b096-0b7c23b03c38" (UID: "bd8e9f00-47a2-4006-b096-0b7c23b03c38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.752649 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-combined-ca-bundle\") pod \"73198913-cbf2-4796-bdc0-562acaedacaa\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.752722 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-config-data\") pod \"73198913-cbf2-4796-bdc0-562acaedacaa\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.752814 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73198913-cbf2-4796-bdc0-562acaedacaa-logs\") pod \"73198913-cbf2-4796-bdc0-562acaedacaa\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.753077 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jkmz\" (UniqueName: \"kubernetes.io/projected/73198913-cbf2-4796-bdc0-562acaedacaa-kube-api-access-4jkmz\") pod \"73198913-cbf2-4796-bdc0-562acaedacaa\" (UID: \"73198913-cbf2-4796-bdc0-562acaedacaa\") " Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.753572 4678 reconciler_common.go:293] "Volume detached 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.753591 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8e9f00-47a2-4006-b096-0b7c23b03c38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.753623 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73198913-cbf2-4796-bdc0-562acaedacaa-logs" (OuterVolumeSpecName: "logs") pod "73198913-cbf2-4796-bdc0-562acaedacaa" (UID: "73198913-cbf2-4796-bdc0-562acaedacaa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.757508 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73198913-cbf2-4796-bdc0-562acaedacaa-kube-api-access-4jkmz" (OuterVolumeSpecName: "kube-api-access-4jkmz") pod "73198913-cbf2-4796-bdc0-562acaedacaa" (UID: "73198913-cbf2-4796-bdc0-562acaedacaa"). InnerVolumeSpecName "kube-api-access-4jkmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.785422 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-config-data" (OuterVolumeSpecName: "config-data") pod "73198913-cbf2-4796-bdc0-562acaedacaa" (UID: "73198913-cbf2-4796-bdc0-562acaedacaa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.795300 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73198913-cbf2-4796-bdc0-562acaedacaa" (UID: "73198913-cbf2-4796-bdc0-562acaedacaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.856088 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jkmz\" (UniqueName: \"kubernetes.io/projected/73198913-cbf2-4796-bdc0-562acaedacaa-kube-api-access-4jkmz\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.856134 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.856148 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73198913-cbf2-4796-bdc0-562acaedacaa-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:34 crc kubenswrapper[4678]: I1124 11:39:34.856159 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73198913-cbf2-4796-bdc0-562acaedacaa-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.024169 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd8e9f00-47a2-4006-b096-0b7c23b03c38","Type":"ContainerDied","Data":"711ab7853f5e51bf26ee6fab1cd0d344ee0123bf131d854e10798bed7ad89bac"} Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.024222 4678 scope.go:117] "RemoveContainer" 
containerID="98afb4097603da7fbc52f15c314b815e59fe4361df6cf2500f3c65c89dd7104b" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.024391 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.034028 4678 generic.go:334] "Generic (PLEG): container finished" podID="73198913-cbf2-4796-bdc0-562acaedacaa" containerID="ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09" exitCode=0 Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.034063 4678 generic.go:334] "Generic (PLEG): container finished" podID="73198913-cbf2-4796-bdc0-562acaedacaa" containerID="69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e" exitCode=143 Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.034149 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73198913-cbf2-4796-bdc0-562acaedacaa","Type":"ContainerDied","Data":"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09"} Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.034196 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73198913-cbf2-4796-bdc0-562acaedacaa","Type":"ContainerDied","Data":"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e"} Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.034211 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"73198913-cbf2-4796-bdc0-562acaedacaa","Type":"ContainerDied","Data":"7cedcd30c745739eef42fb7b4b9e5e5f85a09669f9d6dd1ac6985ac19f6acdcb"} Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.034296 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.066940 4678 scope.go:117] "RemoveContainer" containerID="d70ea5270fe6cf8b1e5c8fcb2fab7ccf8638b0b8d700c07fd9c4690f91d3d482" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.095766 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.132738 4678 scope.go:117] "RemoveContainer" containerID="0726975dfe072c63b563a3cb1d5d33fb13c610236e60235f131c573e336d4ed4" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.146692 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.168025 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:35 crc kubenswrapper[4678]: E1124 11:39:35.168794 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="ceilometer-notification-agent" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.168813 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="ceilometer-notification-agent" Nov 24 11:39:35 crc kubenswrapper[4678]: E1124 11:39:35.168825 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="sg-core" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.168836 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="sg-core" Nov 24 11:39:35 crc kubenswrapper[4678]: E1124 11:39:35.168862 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73198913-cbf2-4796-bdc0-562acaedacaa" containerName="nova-metadata-metadata" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.168869 4678 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73198913-cbf2-4796-bdc0-562acaedacaa" containerName="nova-metadata-metadata" Nov 24 11:39:35 crc kubenswrapper[4678]: E1124 11:39:35.168890 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="ceilometer-central-agent" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.168896 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="ceilometer-central-agent" Nov 24 11:39:35 crc kubenswrapper[4678]: E1124 11:39:35.168909 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="proxy-httpd" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.168916 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="proxy-httpd" Nov 24 11:39:35 crc kubenswrapper[4678]: E1124 11:39:35.168954 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73198913-cbf2-4796-bdc0-562acaedacaa" containerName="nova-metadata-log" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.168961 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="73198913-cbf2-4796-bdc0-562acaedacaa" containerName="nova-metadata-log" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.169207 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="ceilometer-notification-agent" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.169221 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="sg-core" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.169235 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="ceilometer-central-agent" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.169243 4678 
memory_manager.go:354] "RemoveStaleState removing state" podUID="73198913-cbf2-4796-bdc0-562acaedacaa" containerName="nova-metadata-log" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.169259 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="73198913-cbf2-4796-bdc0-562acaedacaa" containerName="nova-metadata-metadata" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.169270 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" containerName="proxy-httpd" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.171899 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.175715 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.175736 4678 scope.go:117] "RemoveContainer" containerID="19124e20dfebdcd3eef63e203f88f495af0a4e394e869a8922af73d3ed7afeb6" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.175935 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.182737 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.198853 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.211541 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.213195 4678 scope.go:117] "RemoveContainer" containerID="ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.227663 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] 
Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.232519 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.236072 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.236080 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.241486 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.265064 4678 scope.go:117] "RemoveContainer" containerID="69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.275194 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-run-httpd\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.275253 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6qj\" (UniqueName: \"kubernetes.io/projected/68a77cae-074a-4561-9b14-16b07e793d63-kube-api-access-7s6qj\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.275469 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-scripts\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc 
kubenswrapper[4678]: I1124 11:39:35.275506 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.275553 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-log-httpd\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.275776 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-config-data\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.275954 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.286252 4678 scope.go:117] "RemoveContainer" containerID="ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09" Nov 24 11:39:35 crc kubenswrapper[4678]: E1124 11:39:35.287034 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09\": container with ID starting with 
ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09 not found: ID does not exist" containerID="ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.287082 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09"} err="failed to get container status \"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09\": rpc error: code = NotFound desc = could not find container \"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09\": container with ID starting with ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09 not found: ID does not exist" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.287112 4678 scope.go:117] "RemoveContainer" containerID="69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e" Nov 24 11:39:35 crc kubenswrapper[4678]: E1124 11:39:35.287596 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e\": container with ID starting with 69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e not found: ID does not exist" containerID="69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.287629 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e"} err="failed to get container status \"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e\": rpc error: code = NotFound desc = could not find container \"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e\": container with ID starting with 69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e not found: ID does not 
exist" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.287647 4678 scope.go:117] "RemoveContainer" containerID="ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.288275 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09"} err="failed to get container status \"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09\": rpc error: code = NotFound desc = could not find container \"ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09\": container with ID starting with ff74f9bb7a8447e96e203829e3ac390e94fcb51291fe97c7a9c1598719c6fa09 not found: ID does not exist" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.288303 4678 scope.go:117] "RemoveContainer" containerID="69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.288715 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e"} err="failed to get container status \"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e\": rpc error: code = NotFound desc = could not find container \"69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e\": container with ID starting with 69d5a4124e1ddcc30296e1c5abae1da9f1d0a75d12a91c683e59aad311e9f13e not found: ID does not exist" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.377853 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-run-httpd\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.377903 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7bm\" (UniqueName: \"kubernetes.io/projected/62bda2f3-60c7-4553-ad90-96c31f74b8b4-kube-api-access-zd7bm\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.377930 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6qj\" (UniqueName: \"kubernetes.io/projected/68a77cae-074a-4561-9b14-16b07e793d63-kube-api-access-7s6qj\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.377979 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378010 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-scripts\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378027 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378047 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-log-httpd\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378077 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378108 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-config-data\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378140 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378159 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-config-data\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378233 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62bda2f3-60c7-4553-ad90-96c31f74b8b4-logs\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " 
pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378339 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-run-httpd\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.378659 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-log-httpd\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.384457 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-config-data\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.384897 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-scripts\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.385225 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.386741 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.395574 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6qj\" (UniqueName: \"kubernetes.io/projected/68a77cae-074a-4561-9b14-16b07e793d63-kube-api-access-7s6qj\") pod \"ceilometer-0\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.480165 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62bda2f3-60c7-4553-ad90-96c31f74b8b4-logs\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.480242 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7bm\" (UniqueName: \"kubernetes.io/projected/62bda2f3-60c7-4553-ad90-96c31f74b8b4-kube-api-access-zd7bm\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.480302 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.480356 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.480409 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-config-data\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.481228 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62bda2f3-60c7-4553-ad90-96c31f74b8b4-logs\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.484738 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-config-data\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.485048 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.490365 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.503317 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd7bm\" (UniqueName: \"kubernetes.io/projected/62bda2f3-60c7-4553-ad90-96c31f74b8b4-kube-api-access-zd7bm\") pod \"nova-metadata-0\" (UID: 
\"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.507592 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.563000 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.909860 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73198913-cbf2-4796-bdc0-562acaedacaa" path="/var/lib/kubelet/pods/73198913-cbf2-4796-bdc0-562acaedacaa/volumes" Nov 24 11:39:35 crc kubenswrapper[4678]: I1124 11:39:35.910926 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8e9f00-47a2-4006-b096-0b7c23b03c38" path="/var/lib/kubelet/pods/bd8e9f00-47a2-4006-b096-0b7c23b03c38/volumes" Nov 24 11:39:36 crc kubenswrapper[4678]: I1124 11:39:36.098172 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:39:36 crc kubenswrapper[4678]: W1124 11:39:36.105966 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62bda2f3_60c7_4553_ad90_96c31f74b8b4.slice/crio-5d718c2fa92c7e850098d1be23a4a25cbd4666b14241466ec8bdcac665e94c2d WatchSource:0}: Error finding container 5d718c2fa92c7e850098d1be23a4a25cbd4666b14241466ec8bdcac665e94c2d: Status 404 returned error can't find the container with id 5d718c2fa92c7e850098d1be23a4a25cbd4666b14241466ec8bdcac665e94c2d Nov 24 11:39:36 crc kubenswrapper[4678]: I1124 11:39:36.108422 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.068252 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"62bda2f3-60c7-4553-ad90-96c31f74b8b4","Type":"ContainerStarted","Data":"f5e9ebfc4b480530eb2569c0cc05295590612c43fc2d96ac7551171656a562b8"} Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.069283 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62bda2f3-60c7-4553-ad90-96c31f74b8b4","Type":"ContainerStarted","Data":"15fa0895b3c9b1b42d5815f25cc2bc3b88206be4d5c8f739f5f0d333eb0c6186"} Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.069392 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62bda2f3-60c7-4553-ad90-96c31f74b8b4","Type":"ContainerStarted","Data":"5d718c2fa92c7e850098d1be23a4a25cbd4666b14241466ec8bdcac665e94c2d"} Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.071317 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerStarted","Data":"30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9"} Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.071344 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerStarted","Data":"798032cd712adbdc6ab7f5216a82ae426965d4abe2992c0f6d1549e9cea3e4d4"} Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.074198 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.074242 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.089740 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.089715067 podStartE2EDuration="2.089715067s" podCreationTimestamp="2025-11-24 11:39:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:37.083590784 +0000 UTC m=+1388.014650423" watchObservedRunningTime="2025-11-24 11:39:37.089715067 +0000 UTC m=+1388.020774706" Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.111427 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.116888 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.254506 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.254875 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.576869 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.635039 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-2w5tz"] Nov 24 11:39:37 crc kubenswrapper[4678]: I1124 11:39:37.637204 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" podUID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" containerName="dnsmasq-dns" containerID="cri-o://e51f4ee7badcb61fb2fceeba9d1b3070f5a40793b470b1485a309d97a7e5f3bb" gracePeriod=10 Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.047712 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-cw5kl"] Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.049647 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.074397 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-cw5kl"] Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.138019 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerStarted","Data":"185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d"} Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.140318 4678 generic.go:334] "Generic (PLEG): container finished" podID="7c07a289-92fa-4945-a0d6-fa2524b0492f" containerID="b841e9b8aba88f1d070e9f7584f8b758b33955b65305660616f6b2d6ccffc3f1" exitCode=0 Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.140413 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-frnfg" event={"ID":"7c07a289-92fa-4945-a0d6-fa2524b0492f","Type":"ContainerDied","Data":"b841e9b8aba88f1d070e9f7584f8b758b33955b65305660616f6b2d6ccffc3f1"} Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.153466 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78946c61-158b-4f91-8717-cffd82196ea0-operator-scripts\") pod \"aodh-db-create-cw5kl\" (UID: \"78946c61-158b-4f91-8717-cffd82196ea0\") " pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.153553 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszp6\" (UniqueName: \"kubernetes.io/projected/78946c61-158b-4f91-8717-cffd82196ea0-kube-api-access-tszp6\") pod \"aodh-db-create-cw5kl\" (UID: \"78946c61-158b-4f91-8717-cffd82196ea0\") " pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.156854 4678 generic.go:334] "Generic (PLEG): container 
finished" podID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" containerID="e51f4ee7badcb61fb2fceeba9d1b3070f5a40793b470b1485a309d97a7e5f3bb" exitCode=0 Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.158649 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" event={"ID":"a189e45a-e15e-4a3b-b5de-3f0608b38f13","Type":"ContainerDied","Data":"e51f4ee7badcb61fb2fceeba9d1b3070f5a40793b470b1485a309d97a7e5f3bb"} Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.176383 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-4187-account-create-w4jn6"] Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.179359 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.182938 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.187975 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4187-account-create-w4jn6"] Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.219917 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.255901 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78946c61-158b-4f91-8717-cffd82196ea0-operator-scripts\") pod \"aodh-db-create-cw5kl\" (UID: \"78946c61-158b-4f91-8717-cffd82196ea0\") " pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.255978 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tszp6\" (UniqueName: \"kubernetes.io/projected/78946c61-158b-4f91-8717-cffd82196ea0-kube-api-access-tszp6\") pod \"aodh-db-create-cw5kl\" (UID: 
\"78946c61-158b-4f91-8717-cffd82196ea0\") " pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.257455 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78946c61-158b-4f91-8717-cffd82196ea0-operator-scripts\") pod \"aodh-db-create-cw5kl\" (UID: \"78946c61-158b-4f91-8717-cffd82196ea0\") " pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.276623 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tszp6\" (UniqueName: \"kubernetes.io/projected/78946c61-158b-4f91-8717-cffd82196ea0-kube-api-access-tszp6\") pod \"aodh-db-create-cw5kl\" (UID: \"78946c61-158b-4f91-8717-cffd82196ea0\") " pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.339997 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.340316 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.357826 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdnl9\" (UniqueName: \"kubernetes.io/projected/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-kube-api-access-cdnl9\") pod \"aodh-4187-account-create-w4jn6\" (UID: \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\") " pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:38 crc 
kubenswrapper[4678]: I1124 11:39:38.358253 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-operator-scripts\") pod \"aodh-4187-account-create-w4jn6\" (UID: \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\") " pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.396533 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.461729 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-operator-scripts\") pod \"aodh-4187-account-create-w4jn6\" (UID: \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\") " pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.461876 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdnl9\" (UniqueName: \"kubernetes.io/projected/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-kube-api-access-cdnl9\") pod \"aodh-4187-account-create-w4jn6\" (UID: \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\") " pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.464409 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-operator-scripts\") pod \"aodh-4187-account-create-w4jn6\" (UID: \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\") " pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.484505 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdnl9\" (UniqueName: 
\"kubernetes.io/projected/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-kube-api-access-cdnl9\") pod \"aodh-4187-account-create-w4jn6\" (UID: \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\") " pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.496397 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.685148 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-swift-storage-0\") pod \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.685226 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-nb\") pod \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.685265 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-sb\") pod \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.685336 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-svc\") pod \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.685400 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvp9x\" 
(UniqueName: \"kubernetes.io/projected/a189e45a-e15e-4a3b-b5de-3f0608b38f13-kube-api-access-tvp9x\") pod \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.685491 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-config\") pod \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\" (UID: \"a189e45a-e15e-4a3b-b5de-3f0608b38f13\") " Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.708435 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a189e45a-e15e-4a3b-b5de-3f0608b38f13-kube-api-access-tvp9x" (OuterVolumeSpecName: "kube-api-access-tvp9x") pod "a189e45a-e15e-4a3b-b5de-3f0608b38f13" (UID: "a189e45a-e15e-4a3b-b5de-3f0608b38f13"). InnerVolumeSpecName "kube-api-access-tvp9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.763940 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.772644 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a189e45a-e15e-4a3b-b5de-3f0608b38f13" (UID: "a189e45a-e15e-4a3b-b5de-3f0608b38f13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.781070 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-config" (OuterVolumeSpecName: "config") pod "a189e45a-e15e-4a3b-b5de-3f0608b38f13" (UID: "a189e45a-e15e-4a3b-b5de-3f0608b38f13"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.795992 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvp9x\" (UniqueName: \"kubernetes.io/projected/a189e45a-e15e-4a3b-b5de-3f0608b38f13-kube-api-access-tvp9x\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.796080 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.796095 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.805362 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a189e45a-e15e-4a3b-b5de-3f0608b38f13" (UID: "a189e45a-e15e-4a3b-b5de-3f0608b38f13"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.822260 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a189e45a-e15e-4a3b-b5de-3f0608b38f13" (UID: "a189e45a-e15e-4a3b-b5de-3f0608b38f13"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.849177 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a189e45a-e15e-4a3b-b5de-3f0608b38f13" (UID: "a189e45a-e15e-4a3b-b5de-3f0608b38f13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.900463 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.900496 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.900507 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a189e45a-e15e-4a3b-b5de-3f0608b38f13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:38 crc kubenswrapper[4678]: I1124 11:39:38.903038 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-cw5kl"] Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.179737 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-cw5kl" event={"ID":"78946c61-158b-4f91-8717-cffd82196ea0","Type":"ContainerStarted","Data":"24cdb485621eb9e22ea5b3a8bbef6fd71b4f914a3550122c642af1706945bcac"} Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.180285 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-cw5kl" 
event={"ID":"78946c61-158b-4f91-8717-cffd82196ea0","Type":"ContainerStarted","Data":"02d0202130e00430378fa1ed6bf0c07bacf487dd8cd00b02caea8515fed622b0"} Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.185432 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerStarted","Data":"48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1"} Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.196312 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-cw5kl" podStartSLOduration=2.196285413 podStartE2EDuration="2.196285413s" podCreationTimestamp="2025-11-24 11:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:39.195348898 +0000 UTC m=+1390.126408547" watchObservedRunningTime="2025-11-24 11:39:39.196285413 +0000 UTC m=+1390.127345052" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.217888 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.218018 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-2w5tz" event={"ID":"a189e45a-e15e-4a3b-b5de-3f0608b38f13","Type":"ContainerDied","Data":"21141659156a58c7740e8b0b782ee7b44494bf611cc7cc59607415796eb74620"} Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.218093 4678 scope.go:117] "RemoveContainer" containerID="e51f4ee7badcb61fb2fceeba9d1b3070f5a40793b470b1485a309d97a7e5f3bb" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.304743 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4187-account-create-w4jn6"] Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.336221 4678 scope.go:117] "RemoveContainer" containerID="d2a40c8b39e319dea27948b1f78aad703eefcbc8c74d81eeae96e9c02492fadb" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.406000 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-2w5tz"] Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.420504 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-2w5tz"] Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.748628 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.815467 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-scripts\") pod \"7c07a289-92fa-4945-a0d6-fa2524b0492f\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.815520 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-combined-ca-bundle\") pod \"7c07a289-92fa-4945-a0d6-fa2524b0492f\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.815541 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-config-data\") pod \"7c07a289-92fa-4945-a0d6-fa2524b0492f\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.815773 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6qjg\" (UniqueName: \"kubernetes.io/projected/7c07a289-92fa-4945-a0d6-fa2524b0492f-kube-api-access-h6qjg\") pod \"7c07a289-92fa-4945-a0d6-fa2524b0492f\" (UID: \"7c07a289-92fa-4945-a0d6-fa2524b0492f\") " Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.823921 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c07a289-92fa-4945-a0d6-fa2524b0492f-kube-api-access-h6qjg" (OuterVolumeSpecName: "kube-api-access-h6qjg") pod "7c07a289-92fa-4945-a0d6-fa2524b0492f" (UID: "7c07a289-92fa-4945-a0d6-fa2524b0492f"). InnerVolumeSpecName "kube-api-access-h6qjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.825819 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-scripts" (OuterVolumeSpecName: "scripts") pod "7c07a289-92fa-4945-a0d6-fa2524b0492f" (UID: "7c07a289-92fa-4945-a0d6-fa2524b0492f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.873971 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-config-data" (OuterVolumeSpecName: "config-data") pod "7c07a289-92fa-4945-a0d6-fa2524b0492f" (UID: "7c07a289-92fa-4945-a0d6-fa2524b0492f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.882300 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c07a289-92fa-4945-a0d6-fa2524b0492f" (UID: "7c07a289-92fa-4945-a0d6-fa2524b0492f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.910535 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" path="/var/lib/kubelet/pods/a189e45a-e15e-4a3b-b5de-3f0608b38f13/volumes" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.917734 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6qjg\" (UniqueName: \"kubernetes.io/projected/7c07a289-92fa-4945-a0d6-fa2524b0492f-kube-api-access-h6qjg\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.917990 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.918016 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:39 crc kubenswrapper[4678]: I1124 11:39:39.918028 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c07a289-92fa-4945-a0d6-fa2524b0492f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.246271 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-frnfg" event={"ID":"7c07a289-92fa-4945-a0d6-fa2524b0492f","Type":"ContainerDied","Data":"bd3fdc4010ba8cdd8a1c8c90449c5103cff4906a677e289959bfffbea588cc21"} Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.247525 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd3fdc4010ba8cdd8a1c8c90449c5103cff4906a677e289959bfffbea588cc21" Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.247627 4678 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-frnfg" Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.270726 4678 generic.go:334] "Generic (PLEG): container finished" podID="6e0d74d3-3a32-4293-a8a7-53b6f541cbdd" containerID="96c297a54196c1f5d7d6fa0ed71e695c0f34d0a588925a2d19693463f1854bb3" exitCode=0 Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.271226 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4187-account-create-w4jn6" event={"ID":"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd","Type":"ContainerDied","Data":"96c297a54196c1f5d7d6fa0ed71e695c0f34d0a588925a2d19693463f1854bb3"} Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.271267 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4187-account-create-w4jn6" event={"ID":"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd","Type":"ContainerStarted","Data":"dcb156b09e5b0cef3f1ae1564e8430e8e5a3c3037881560101cf8358e38602c2"} Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.280211 4678 generic.go:334] "Generic (PLEG): container finished" podID="78946c61-158b-4f91-8717-cffd82196ea0" containerID="24cdb485621eb9e22ea5b3a8bbef6fd71b4f914a3550122c642af1706945bcac" exitCode=0 Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.280271 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-cw5kl" event={"ID":"78946c61-158b-4f91-8717-cffd82196ea0","Type":"ContainerDied","Data":"24cdb485621eb9e22ea5b3a8bbef6fd71b4f914a3550122c642af1706945bcac"} Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.356528 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.356939 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-log" 
containerID="cri-o://0b27460ca5f4c19bec1f79417c04b4806b15761578c31189d2483df4b4810c52" gracePeriod=30 Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.357293 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-api" containerID="cri-o://4b23073c074a48f6e9e900563d1d17dc3c8f96e95a42fa5359bf4b1e63da140e" gracePeriod=30 Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.468752 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.469106 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f881b8b7-c793-4eea-8a59-5095344fce59" containerName="nova-scheduler-scheduler" containerID="cri-o://9006cdc8b802f5a4126dd2f5eb2f548e98d008124214e5f8647700e6c8cdda53" gracePeriod=30 Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.495862 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.496082 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerName="nova-metadata-log" containerID="cri-o://15fa0895b3c9b1b42d5815f25cc2bc3b88206be4d5c8f739f5f0d333eb0c6186" gracePeriod=30 Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.496563 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerName="nova-metadata-metadata" containerID="cri-o://f5e9ebfc4b480530eb2569c0cc05295590612c43fc2d96ac7551171656a562b8" gracePeriod=30 Nov 24 11:39:40 crc kubenswrapper[4678]: I1124 11:39:40.565824 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:39:40 crc 
kubenswrapper[4678]: I1124 11:39:40.565879 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.299819 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerStarted","Data":"6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d"} Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.300226 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.301931 4678 generic.go:334] "Generic (PLEG): container finished" podID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerID="0b27460ca5f4c19bec1f79417c04b4806b15761578c31189d2483df4b4810c52" exitCode=143 Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.302124 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e7def9d-7b7b-4ce9-a8eb-e5736e671100","Type":"ContainerDied","Data":"0b27460ca5f4c19bec1f79417c04b4806b15761578c31189d2483df4b4810c52"} Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.308605 4678 generic.go:334] "Generic (PLEG): container finished" podID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerID="f5e9ebfc4b480530eb2569c0cc05295590612c43fc2d96ac7551171656a562b8" exitCode=0 Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.308655 4678 generic.go:334] "Generic (PLEG): container finished" podID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerID="15fa0895b3c9b1b42d5815f25cc2bc3b88206be4d5c8f739f5f0d333eb0c6186" exitCode=143 Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.308647 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62bda2f3-60c7-4553-ad90-96c31f74b8b4","Type":"ContainerDied","Data":"f5e9ebfc4b480530eb2569c0cc05295590612c43fc2d96ac7551171656a562b8"} Nov 24 11:39:41 crc 
kubenswrapper[4678]: I1124 11:39:41.308863 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62bda2f3-60c7-4553-ad90-96c31f74b8b4","Type":"ContainerDied","Data":"15fa0895b3c9b1b42d5815f25cc2bc3b88206be4d5c8f739f5f0d333eb0c6186"} Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.474523 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.505152 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.466904185 podStartE2EDuration="6.505131318s" podCreationTimestamp="2025-11-24 11:39:35 +0000 UTC" firstStartedPulling="2025-11-24 11:39:36.108172138 +0000 UTC m=+1387.039231777" lastFinishedPulling="2025-11-24 11:39:40.146399271 +0000 UTC m=+1391.077458910" observedRunningTime="2025-11-24 11:39:41.325905715 +0000 UTC m=+1392.256965364" watchObservedRunningTime="2025-11-24 11:39:41.505131318 +0000 UTC m=+1392.436190957" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.578856 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-config-data\") pod \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.579081 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-nova-metadata-tls-certs\") pod \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.579217 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7bm\" (UniqueName: 
\"kubernetes.io/projected/62bda2f3-60c7-4553-ad90-96c31f74b8b4-kube-api-access-zd7bm\") pod \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.579297 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62bda2f3-60c7-4553-ad90-96c31f74b8b4-logs\") pod \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.579323 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-combined-ca-bundle\") pod \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\" (UID: \"62bda2f3-60c7-4553-ad90-96c31f74b8b4\") " Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.580118 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bda2f3-60c7-4553-ad90-96c31f74b8b4-logs" (OuterVolumeSpecName: "logs") pod "62bda2f3-60c7-4553-ad90-96c31f74b8b4" (UID: "62bda2f3-60c7-4553-ad90-96c31f74b8b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.593178 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62bda2f3-60c7-4553-ad90-96c31f74b8b4-kube-api-access-zd7bm" (OuterVolumeSpecName: "kube-api-access-zd7bm") pod "62bda2f3-60c7-4553-ad90-96c31f74b8b4" (UID: "62bda2f3-60c7-4553-ad90-96c31f74b8b4"). InnerVolumeSpecName "kube-api-access-zd7bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.647250 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-config-data" (OuterVolumeSpecName: "config-data") pod "62bda2f3-60c7-4553-ad90-96c31f74b8b4" (UID: "62bda2f3-60c7-4553-ad90-96c31f74b8b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.652059 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62bda2f3-60c7-4553-ad90-96c31f74b8b4" (UID: "62bda2f3-60c7-4553-ad90-96c31f74b8b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.684538 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62bda2f3-60c7-4553-ad90-96c31f74b8b4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.684577 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.684588 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.684599 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd7bm\" (UniqueName: \"kubernetes.io/projected/62bda2f3-60c7-4553-ad90-96c31f74b8b4-kube-api-access-zd7bm\") on node \"crc\" DevicePath \"\"" Nov 24 
11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.812319 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.818055 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "62bda2f3-60c7-4553-ad90-96c31f74b8b4" (UID: "62bda2f3-60c7-4553-ad90-96c31f74b8b4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.862963 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.893712 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78946c61-158b-4f91-8717-cffd82196ea0-operator-scripts\") pod \"78946c61-158b-4f91-8717-cffd82196ea0\" (UID: \"78946c61-158b-4f91-8717-cffd82196ea0\") " Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.893877 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tszp6\" (UniqueName: \"kubernetes.io/projected/78946c61-158b-4f91-8717-cffd82196ea0-kube-api-access-tszp6\") pod \"78946c61-158b-4f91-8717-cffd82196ea0\" (UID: \"78946c61-158b-4f91-8717-cffd82196ea0\") " Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.894530 4678 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62bda2f3-60c7-4553-ad90-96c31f74b8b4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.896887 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/78946c61-158b-4f91-8717-cffd82196ea0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78946c61-158b-4f91-8717-cffd82196ea0" (UID: "78946c61-158b-4f91-8717-cffd82196ea0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:41 crc kubenswrapper[4678]: I1124 11:39:41.917963 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78946c61-158b-4f91-8717-cffd82196ea0-kube-api-access-tszp6" (OuterVolumeSpecName: "kube-api-access-tszp6") pod "78946c61-158b-4f91-8717-cffd82196ea0" (UID: "78946c61-158b-4f91-8717-cffd82196ea0"). InnerVolumeSpecName "kube-api-access-tszp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:41.996873 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-operator-scripts\") pod \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\" (UID: \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\") " Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:41.997019 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdnl9\" (UniqueName: \"kubernetes.io/projected/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-kube-api-access-cdnl9\") pod \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\" (UID: \"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd\") " Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:41.997797 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78946c61-158b-4f91-8717-cffd82196ea0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:41.997811 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tszp6\" (UniqueName: 
\"kubernetes.io/projected/78946c61-158b-4f91-8717-cffd82196ea0-kube-api-access-tszp6\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:41.998473 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e0d74d3-3a32-4293-a8a7-53b6f541cbdd" (UID: "6e0d74d3-3a32-4293-a8a7-53b6f541cbdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.000224 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-kube-api-access-cdnl9" (OuterVolumeSpecName: "kube-api-access-cdnl9") pod "6e0d74d3-3a32-4293-a8a7-53b6f541cbdd" (UID: "6e0d74d3-3a32-4293-a8a7-53b6f541cbdd"). InnerVolumeSpecName "kube-api-access-cdnl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.075881 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9006cdc8b802f5a4126dd2f5eb2f548e98d008124214e5f8647700e6c8cdda53" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.077125 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9006cdc8b802f5a4126dd2f5eb2f548e98d008124214e5f8647700e6c8cdda53" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.078116 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9006cdc8b802f5a4126dd2f5eb2f548e98d008124214e5f8647700e6c8cdda53" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.078150 4678 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f881b8b7-c793-4eea-8a59-5095344fce59" containerName="nova-scheduler-scheduler" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.100172 4678 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.100211 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdnl9\" (UniqueName: \"kubernetes.io/projected/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd-kube-api-access-cdnl9\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.322127 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-cw5kl" event={"ID":"78946c61-158b-4f91-8717-cffd82196ea0","Type":"ContainerDied","Data":"02d0202130e00430378fa1ed6bf0c07bacf487dd8cd00b02caea8515fed622b0"} Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.322172 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d0202130e00430378fa1ed6bf0c07bacf487dd8cd00b02caea8515fed622b0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.322267 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-cw5kl" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.324613 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4187-account-create-w4jn6" event={"ID":"6e0d74d3-3a32-4293-a8a7-53b6f541cbdd","Type":"ContainerDied","Data":"dcb156b09e5b0cef3f1ae1564e8430e8e5a3c3037881560101cf8358e38602c2"} Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.324651 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcb156b09e5b0cef3f1ae1564e8430e8e5a3c3037881560101cf8358e38602c2" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.324732 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4187-account-create-w4jn6" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.336256 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62bda2f3-60c7-4553-ad90-96c31f74b8b4","Type":"ContainerDied","Data":"5d718c2fa92c7e850098d1be23a4a25cbd4666b14241466ec8bdcac665e94c2d"} Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.336306 4678 scope.go:117] "RemoveContainer" containerID="f5e9ebfc4b480530eb2569c0cc05295590612c43fc2d96ac7551171656a562b8" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.336315 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.370325 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.379784 4678 scope.go:117] "RemoveContainer" containerID="15fa0895b3c9b1b42d5815f25cc2bc3b88206be4d5c8f739f5f0d333eb0c6186" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.392308 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.406621 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.415896 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0d74d3-3a32-4293-a8a7-53b6f541cbdd" containerName="mariadb-account-create" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.415950 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0d74d3-3a32-4293-a8a7-53b6f541cbdd" containerName="mariadb-account-create" Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.415966 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerName="nova-metadata-metadata" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.415975 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerName="nova-metadata-metadata" Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.416004 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78946c61-158b-4f91-8717-cffd82196ea0" containerName="mariadb-database-create" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416011 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="78946c61-158b-4f91-8717-cffd82196ea0" containerName="mariadb-database-create" Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.416046 4678 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" containerName="init" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416052 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" containerName="init" Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.416069 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c07a289-92fa-4945-a0d6-fa2524b0492f" containerName="nova-manage" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416081 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c07a289-92fa-4945-a0d6-fa2524b0492f" containerName="nova-manage" Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.416109 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" containerName="dnsmasq-dns" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416118 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" containerName="dnsmasq-dns" Nov 24 11:39:42 crc kubenswrapper[4678]: E1124 11:39:42.416138 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerName="nova-metadata-log" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416146 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerName="nova-metadata-log" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416524 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="78946c61-158b-4f91-8717-cffd82196ea0" containerName="mariadb-database-create" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416552 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerName="nova-metadata-log" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416573 4678 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" containerName="nova-metadata-metadata" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416585 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0d74d3-3a32-4293-a8a7-53b6f541cbdd" containerName="mariadb-account-create" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416604 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a189e45a-e15e-4a3b-b5de-3f0608b38f13" containerName="dnsmasq-dns" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.416616 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c07a289-92fa-4945-a0d6-fa2524b0492f" containerName="nova-manage" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.426557 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.426663 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.430080 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.430623 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.507706 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-config-data\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.507945 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.508066 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722acbe1-a292-43be-88ea-7759fb793035-logs\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.508284 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.508380 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khvn8\" (UniqueName: \"kubernetes.io/projected/722acbe1-a292-43be-88ea-7759fb793035-kube-api-access-khvn8\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.610851 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.610945 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722acbe1-a292-43be-88ea-7759fb793035-logs\") pod \"nova-metadata-0\" (UID: 
\"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.611017 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.611057 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khvn8\" (UniqueName: \"kubernetes.io/projected/722acbe1-a292-43be-88ea-7759fb793035-kube-api-access-khvn8\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.611096 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-config-data\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.611454 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722acbe1-a292-43be-88ea-7759fb793035-logs\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.616394 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.616465 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-config-data\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.623478 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.629034 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khvn8\" (UniqueName: \"kubernetes.io/projected/722acbe1-a292-43be-88ea-7759fb793035-kube-api-access-khvn8\") pod \"nova-metadata-0\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " pod="openstack/nova-metadata-0" Nov 24 11:39:42 crc kubenswrapper[4678]: I1124 11:39:42.760032 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.267434 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.353715 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722acbe1-a292-43be-88ea-7759fb793035","Type":"ContainerStarted","Data":"7d2784b4220e3b8c10608184b0e2151849ad7457144c6d3b7cbb0e8a419cace8"} Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.357100 4678 generic.go:334] "Generic (PLEG): container finished" podID="0a390c1f-e5b4-47a0-a9e8-a9979475fbab" containerID="b71e95ad1773190481a5e9b395e2aa833103646e5c626528943357a7946244db" exitCode=0 Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.357142 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mptk6" event={"ID":"0a390c1f-e5b4-47a0-a9e8-a9979475fbab","Type":"ContainerDied","Data":"b71e95ad1773190481a5e9b395e2aa833103646e5c626528943357a7946244db"} Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.456813 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-rzwfw"] Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.458816 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.463530 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bwbmq" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.463532 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.464103 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.464814 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.470210 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rzwfw"] Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.532841 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-config-data\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.532877 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-scripts\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.532908 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9jw\" (UniqueName: \"kubernetes.io/projected/5410b784-9693-43c3-9f8a-43084f540dc6-kube-api-access-ft9jw\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " 
pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.532968 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-combined-ca-bundle\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.635031 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-config-data\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.635096 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-scripts\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.635131 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9jw\" (UniqueName: \"kubernetes.io/projected/5410b784-9693-43c3-9f8a-43084f540dc6-kube-api-access-ft9jw\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.635219 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-combined-ca-bundle\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.639042 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-combined-ca-bundle\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.639742 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-scripts\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.641189 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-config-data\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.653726 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9jw\" (UniqueName: \"kubernetes.io/projected/5410b784-9693-43c3-9f8a-43084f540dc6-kube-api-access-ft9jw\") pod \"aodh-db-sync-rzwfw\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.782127 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:43 crc kubenswrapper[4678]: I1124 11:39:43.910439 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62bda2f3-60c7-4553-ad90-96c31f74b8b4" path="/var/lib/kubelet/pods/62bda2f3-60c7-4553-ad90-96c31f74b8b4/volumes" Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.257566 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rzwfw"] Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.369160 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rzwfw" event={"ID":"5410b784-9693-43c3-9f8a-43084f540dc6","Type":"ContainerStarted","Data":"8813944544d8e609870842206890aa30cb2bee1b9a46bf080ab02ceeafe7980a"} Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.371252 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722acbe1-a292-43be-88ea-7759fb793035","Type":"ContainerStarted","Data":"12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935"} Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.371277 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722acbe1-a292-43be-88ea-7759fb793035","Type":"ContainerStarted","Data":"a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c"} Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.880334 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.896788 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.89676655 podStartE2EDuration="2.89676655s" podCreationTimestamp="2025-11-24 11:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:44.403196761 +0000 UTC m=+1395.334256400" watchObservedRunningTime="2025-11-24 11:39:44.89676655 +0000 UTC m=+1395.827826189" Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.993317 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-scripts\") pod \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.993589 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjxtt\" (UniqueName: \"kubernetes.io/projected/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-kube-api-access-vjxtt\") pod \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.993648 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-config-data\") pod \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\" (UID: \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " Nov 24 11:39:44 crc kubenswrapper[4678]: I1124 11:39:44.993807 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-combined-ca-bundle\") pod \"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\" (UID: 
\"0a390c1f-e5b4-47a0-a9e8-a9979475fbab\") " Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.013894 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-kube-api-access-vjxtt" (OuterVolumeSpecName: "kube-api-access-vjxtt") pod "0a390c1f-e5b4-47a0-a9e8-a9979475fbab" (UID: "0a390c1f-e5b4-47a0-a9e8-a9979475fbab"). InnerVolumeSpecName "kube-api-access-vjxtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.022802 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-scripts" (OuterVolumeSpecName: "scripts") pod "0a390c1f-e5b4-47a0-a9e8-a9979475fbab" (UID: "0a390c1f-e5b4-47a0-a9e8-a9979475fbab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.040198 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a390c1f-e5b4-47a0-a9e8-a9979475fbab" (UID: "0a390c1f-e5b4-47a0-a9e8-a9979475fbab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.056013 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-config-data" (OuterVolumeSpecName: "config-data") pod "0a390c1f-e5b4-47a0-a9e8-a9979475fbab" (UID: "0a390c1f-e5b4-47a0-a9e8-a9979475fbab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.095919 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjxtt\" (UniqueName: \"kubernetes.io/projected/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-kube-api-access-vjxtt\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.095955 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.095966 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.096010 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a390c1f-e5b4-47a0-a9e8-a9979475fbab-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.406291 4678 generic.go:334] "Generic (PLEG): container finished" podID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerID="4b23073c074a48f6e9e900563d1d17dc3c8f96e95a42fa5359bf4b1e63da140e" exitCode=0 Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.406426 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e7def9d-7b7b-4ce9-a8eb-e5736e671100","Type":"ContainerDied","Data":"4b23073c074a48f6e9e900563d1d17dc3c8f96e95a42fa5359bf4b1e63da140e"} Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.411556 4678 generic.go:334] "Generic (PLEG): container finished" podID="f881b8b7-c793-4eea-8a59-5095344fce59" containerID="9006cdc8b802f5a4126dd2f5eb2f548e98d008124214e5f8647700e6c8cdda53" exitCode=0 Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.411617 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f881b8b7-c793-4eea-8a59-5095344fce59","Type":"ContainerDied","Data":"9006cdc8b802f5a4126dd2f5eb2f548e98d008124214e5f8647700e6c8cdda53"} Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.419873 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mptk6" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.419889 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mptk6" event={"ID":"0a390c1f-e5b4-47a0-a9e8-a9979475fbab","Type":"ContainerDied","Data":"56f09f217d3afa7199039697e00027c2f08cb1f214ae4d1c3a76fe3f7f0c1daa"} Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.419953 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56f09f217d3afa7199039697e00027c2f08cb1f214ae4d1c3a76fe3f7f0c1daa" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.503837 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 11:39:45 crc kubenswrapper[4678]: E1124 11:39:45.504509 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a390c1f-e5b4-47a0-a9e8-a9979475fbab" containerName="nova-cell1-conductor-db-sync" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.504539 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a390c1f-e5b4-47a0-a9e8-a9979475fbab" containerName="nova-cell1-conductor-db-sync" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.504786 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a390c1f-e5b4-47a0-a9e8-a9979475fbab" containerName="nova-cell1-conductor-db-sync" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.505636 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.508651 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a85b96e-7419-42cd-80c4-e1d4ef411dee-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.508712 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a85b96e-7419-42cd-80c4-e1d4ef411dee-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.508780 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77bx\" (UniqueName: \"kubernetes.io/projected/4a85b96e-7419-42cd-80c4-e1d4ef411dee-kube-api-access-g77bx\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.519935 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.534840 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.619130 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a85b96e-7419-42cd-80c4-e1d4ef411dee-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc 
kubenswrapper[4678]: I1124 11:39:45.619227 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a85b96e-7419-42cd-80c4-e1d4ef411dee-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.620474 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77bx\" (UniqueName: \"kubernetes.io/projected/4a85b96e-7419-42cd-80c4-e1d4ef411dee-kube-api-access-g77bx\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.625971 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a85b96e-7419-42cd-80c4-e1d4ef411dee-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.633686 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a85b96e-7419-42cd-80c4-e1d4ef411dee-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.640655 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77bx\" (UniqueName: \"kubernetes.io/projected/4a85b96e-7419-42cd-80c4-e1d4ef411dee-kube-api-access-g77bx\") pod \"nova-cell1-conductor-0\" (UID: \"4a85b96e-7419-42cd-80c4-e1d4ef411dee\") " pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.777354 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.786117 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.825062 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.825281 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndfhr\" (UniqueName: \"kubernetes.io/projected/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-kube-api-access-ndfhr\") pod \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.825785 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-combined-ca-bundle\") pod \"f881b8b7-c793-4eea-8a59-5095344fce59\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.825821 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqr69\" (UniqueName: \"kubernetes.io/projected/f881b8b7-c793-4eea-8a59-5095344fce59-kube-api-access-nqr69\") pod \"f881b8b7-c793-4eea-8a59-5095344fce59\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.825894 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-combined-ca-bundle\") pod \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.825999 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-config-data\") pod \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.826031 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-logs\") pod \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\" (UID: \"2e7def9d-7b7b-4ce9-a8eb-e5736e671100\") " Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.826111 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-config-data\") pod \"f881b8b7-c793-4eea-8a59-5095344fce59\" (UID: \"f881b8b7-c793-4eea-8a59-5095344fce59\") " Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.830784 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f881b8b7-c793-4eea-8a59-5095344fce59-kube-api-access-nqr69" (OuterVolumeSpecName: "kube-api-access-nqr69") pod "f881b8b7-c793-4eea-8a59-5095344fce59" (UID: "f881b8b7-c793-4eea-8a59-5095344fce59"). InnerVolumeSpecName "kube-api-access-nqr69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.831282 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-logs" (OuterVolumeSpecName: "logs") pod "2e7def9d-7b7b-4ce9-a8eb-e5736e671100" (UID: "2e7def9d-7b7b-4ce9-a8eb-e5736e671100"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.833902 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-kube-api-access-ndfhr" (OuterVolumeSpecName: "kube-api-access-ndfhr") pod "2e7def9d-7b7b-4ce9-a8eb-e5736e671100" (UID: "2e7def9d-7b7b-4ce9-a8eb-e5736e671100"). InnerVolumeSpecName "kube-api-access-ndfhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.875351 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-config-data" (OuterVolumeSpecName: "config-data") pod "f881b8b7-c793-4eea-8a59-5095344fce59" (UID: "f881b8b7-c793-4eea-8a59-5095344fce59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.889083 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f881b8b7-c793-4eea-8a59-5095344fce59" (UID: "f881b8b7-c793-4eea-8a59-5095344fce59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.902074 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-config-data" (OuterVolumeSpecName: "config-data") pod "2e7def9d-7b7b-4ce9-a8eb-e5736e671100" (UID: "2e7def9d-7b7b-4ce9-a8eb-e5736e671100"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.911614 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e7def9d-7b7b-4ce9-a8eb-e5736e671100" (UID: "2e7def9d-7b7b-4ce9-a8eb-e5736e671100"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.929849 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.929984 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqr69\" (UniqueName: \"kubernetes.io/projected/f881b8b7-c793-4eea-8a59-5095344fce59-kube-api-access-nqr69\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.930030 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.930043 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.930052 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.930060 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f881b8b7-c793-4eea-8a59-5095344fce59-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:45 crc kubenswrapper[4678]: I1124 11:39:45.930174 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndfhr\" (UniqueName: \"kubernetes.io/projected/2e7def9d-7b7b-4ce9-a8eb-e5736e671100-kube-api-access-ndfhr\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:46 crc kubenswrapper[4678]: W1124 11:39:46.348559 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a85b96e_7419_42cd_80c4_e1d4ef411dee.slice/crio-416476ca5467e58c8cb920d6008cfc3f1e35ece138a03c16c8e7a68e1682e7ab WatchSource:0}: Error finding container 416476ca5467e58c8cb920d6008cfc3f1e35ece138a03c16c8e7a68e1682e7ab: Status 404 returned error can't find the container with id 416476ca5467e58c8cb920d6008cfc3f1e35ece138a03c16c8e7a68e1682e7ab Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.369009 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.435696 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e7def9d-7b7b-4ce9-a8eb-e5736e671100","Type":"ContainerDied","Data":"007eb9fb66fdc02f0dad418f6d937280658ea69f3d43ffc44fb6c527dd4fd427"} Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.435751 4678 scope.go:117] "RemoveContainer" containerID="4b23073c074a48f6e9e900563d1d17dc3c8f96e95a42fa5359bf4b1e63da140e" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.435884 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.443459 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f881b8b7-c793-4eea-8a59-5095344fce59","Type":"ContainerDied","Data":"4e3ea862ce759879d499abafe94e45d12339ef44d2bfb88163838f2a0176aebf"} Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.443495 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.445903 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4a85b96e-7419-42cd-80c4-e1d4ef411dee","Type":"ContainerStarted","Data":"416476ca5467e58c8cb920d6008cfc3f1e35ece138a03c16c8e7a68e1682e7ab"} Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.604179 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.613701 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.625245 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.637907 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.648115 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: E1124 11:39:46.648627 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-api" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.648639 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-api" Nov 24 11:39:46 crc 
kubenswrapper[4678]: E1124 11:39:46.648708 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-log" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.648715 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-log" Nov 24 11:39:46 crc kubenswrapper[4678]: E1124 11:39:46.648748 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f881b8b7-c793-4eea-8a59-5095344fce59" containerName="nova-scheduler-scheduler" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.648758 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f881b8b7-c793-4eea-8a59-5095344fce59" containerName="nova-scheduler-scheduler" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.648960 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="f881b8b7-c793-4eea-8a59-5095344fce59" containerName="nova-scheduler-scheduler" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.648993 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-log" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.649004 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" containerName="nova-api-api" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.649798 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.656221 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.656600 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.677656 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.706714 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.759009 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-config-data\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.759366 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-config-data\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.759541 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.759662 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-logs\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.759744 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.759788 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x7nw\" (UniqueName: \"kubernetes.io/projected/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-kube-api-access-6x7nw\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.759893 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qvws\" (UniqueName: \"kubernetes.io/projected/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-kube-api-access-6qvws\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.768817 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.780969 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.862647 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-config-data\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.862740 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.862800 4678 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-logs\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.862841 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.862862 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x7nw\" (UniqueName: \"kubernetes.io/projected/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-kube-api-access-6x7nw\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.862898 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qvws\" (UniqueName: \"kubernetes.io/projected/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-kube-api-access-6qvws\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.862996 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-config-data\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.863966 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-logs\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 
11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.869675 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.870630 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.875221 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-config-data\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.878921 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-config-data\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.881349 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x7nw\" (UniqueName: \"kubernetes.io/projected/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-kube-api-access-6x7nw\") pod \"nova-api-0\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " pod="openstack/nova-api-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.883941 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qvws\" (UniqueName: 
\"kubernetes.io/projected/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-kube-api-access-6qvws\") pod \"nova-scheduler-0\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " pod="openstack/nova-scheduler-0" Nov 24 11:39:46 crc kubenswrapper[4678]: I1124 11:39:46.996658 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:39:47 crc kubenswrapper[4678]: I1124 11:39:47.041743 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:39:47 crc kubenswrapper[4678]: I1124 11:39:47.475832 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4a85b96e-7419-42cd-80c4-e1d4ef411dee","Type":"ContainerStarted","Data":"ce10ab5ef55d20b8ea1eee98c3f4728f697e3250565ecf51c787f07202fb9dcc"} Nov 24 11:39:47 crc kubenswrapper[4678]: I1124 11:39:47.476124 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:47 crc kubenswrapper[4678]: I1124 11:39:47.508012 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.507989271 podStartE2EDuration="2.507989271s" podCreationTimestamp="2025-11-24 11:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:47.495752114 +0000 UTC m=+1398.426811763" watchObservedRunningTime="2025-11-24 11:39:47.507989271 +0000 UTC m=+1398.439048930" Nov 24 11:39:47 crc kubenswrapper[4678]: I1124 11:39:47.761010 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:39:47 crc kubenswrapper[4678]: I1124 11:39:47.761075 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:39:47 crc kubenswrapper[4678]: I1124 11:39:47.910951 4678 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="2e7def9d-7b7b-4ce9-a8eb-e5736e671100" path="/var/lib/kubelet/pods/2e7def9d-7b7b-4ce9-a8eb-e5736e671100/volumes" Nov 24 11:39:47 crc kubenswrapper[4678]: I1124 11:39:47.911808 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f881b8b7-c793-4eea-8a59-5095344fce59" path="/var/lib/kubelet/pods/f881b8b7-c793-4eea-8a59-5095344fce59/volumes" Nov 24 11:39:49 crc kubenswrapper[4678]: I1124 11:39:49.386581 4678 scope.go:117] "RemoveContainer" containerID="0b27460ca5f4c19bec1f79417c04b4806b15761578c31189d2483df4b4810c52" Nov 24 11:39:49 crc kubenswrapper[4678]: I1124 11:39:49.484977 4678 scope.go:117] "RemoveContainer" containerID="9006cdc8b802f5a4126dd2f5eb2f548e98d008124214e5f8647700e6c8cdda53" Nov 24 11:39:50 crc kubenswrapper[4678]: I1124 11:39:50.511172 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rzwfw" event={"ID":"5410b784-9693-43c3-9f8a-43084f540dc6","Type":"ContainerStarted","Data":"40e348254e60236776df3d800722a35a5faea143fa15615ee31792077462433e"} Nov 24 11:39:50 crc kubenswrapper[4678]: I1124 11:39:50.535834 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-rzwfw" podStartSLOduration=2.296702156 podStartE2EDuration="7.535815404s" podCreationTimestamp="2025-11-24 11:39:43 +0000 UTC" firstStartedPulling="2025-11-24 11:39:44.256339803 +0000 UTC m=+1395.187399442" lastFinishedPulling="2025-11-24 11:39:49.495453051 +0000 UTC m=+1400.426512690" observedRunningTime="2025-11-24 11:39:50.525663832 +0000 UTC m=+1401.456723471" watchObservedRunningTime="2025-11-24 11:39:50.535815404 +0000 UTC m=+1401.466875043" Nov 24 11:39:50 crc kubenswrapper[4678]: I1124 11:39:50.626619 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:39:50 crc kubenswrapper[4678]: W1124 11:39:50.630432 4678 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e4b7173_e5ad_48ee_b578_4f67d6b0e832.slice/crio-63d002fe4533aa03dfd51fb09902049415459d234adae681e1334f5664cea3d7 WatchSource:0}: Error finding container 63d002fe4533aa03dfd51fb09902049415459d234adae681e1334f5664cea3d7: Status 404 returned error can't find the container with id 63d002fe4533aa03dfd51fb09902049415459d234adae681e1334f5664cea3d7 Nov 24 11:39:50 crc kubenswrapper[4678]: I1124 11:39:50.642097 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:39:51 crc kubenswrapper[4678]: I1124 11:39:51.530781 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9","Type":"ContainerStarted","Data":"f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912"} Nov 24 11:39:51 crc kubenswrapper[4678]: I1124 11:39:51.532249 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9","Type":"ContainerStarted","Data":"c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3"} Nov 24 11:39:51 crc kubenswrapper[4678]: I1124 11:39:51.532317 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9","Type":"ContainerStarted","Data":"f89038e0780f006cf40def0f50436d3ee888dfd71ef4b83479cecb3c3ec16c4f"} Nov 24 11:39:51 crc kubenswrapper[4678]: I1124 11:39:51.537098 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e4b7173-e5ad-48ee-b578-4f67d6b0e832","Type":"ContainerStarted","Data":"7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1"} Nov 24 11:39:51 crc kubenswrapper[4678]: I1124 11:39:51.537152 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"1e4b7173-e5ad-48ee-b578-4f67d6b0e832","Type":"ContainerStarted","Data":"63d002fe4533aa03dfd51fb09902049415459d234adae681e1334f5664cea3d7"} Nov 24 11:39:51 crc kubenswrapper[4678]: I1124 11:39:51.560313 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.560297091 podStartE2EDuration="5.560297091s" podCreationTimestamp="2025-11-24 11:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:51.548321231 +0000 UTC m=+1402.479380910" watchObservedRunningTime="2025-11-24 11:39:51.560297091 +0000 UTC m=+1402.491356730" Nov 24 11:39:51 crc kubenswrapper[4678]: I1124 11:39:51.573468 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.573452713 podStartE2EDuration="5.573452713s" podCreationTimestamp="2025-11-24 11:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:39:51.567546385 +0000 UTC m=+1402.498606024" watchObservedRunningTime="2025-11-24 11:39:51.573452713 +0000 UTC m=+1402.504512352" Nov 24 11:39:51 crc kubenswrapper[4678]: I1124 11:39:51.997104 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 11:39:52 crc kubenswrapper[4678]: I1124 11:39:52.550515 4678 generic.go:334] "Generic (PLEG): container finished" podID="5410b784-9693-43c3-9f8a-43084f540dc6" containerID="40e348254e60236776df3d800722a35a5faea143fa15615ee31792077462433e" exitCode=0 Nov 24 11:39:52 crc kubenswrapper[4678]: I1124 11:39:52.551347 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rzwfw" event={"ID":"5410b784-9693-43c3-9f8a-43084f540dc6","Type":"ContainerDied","Data":"40e348254e60236776df3d800722a35a5faea143fa15615ee31792077462433e"} Nov 24 
11:39:52 crc kubenswrapper[4678]: I1124 11:39:52.761212 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:39:52 crc kubenswrapper[4678]: I1124 11:39:52.761563 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:39:53 crc kubenswrapper[4678]: I1124 11:39:53.774880 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.241:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:39:53 crc kubenswrapper[4678]: I1124 11:39:53.774922 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.241:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.039453 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.071465 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-config-data\") pod \"5410b784-9693-43c3-9f8a-43084f540dc6\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.071569 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-scripts\") pod \"5410b784-9693-43c3-9f8a-43084f540dc6\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.071599 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft9jw\" (UniqueName: \"kubernetes.io/projected/5410b784-9693-43c3-9f8a-43084f540dc6-kube-api-access-ft9jw\") pod \"5410b784-9693-43c3-9f8a-43084f540dc6\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.071623 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-combined-ca-bundle\") pod \"5410b784-9693-43c3-9f8a-43084f540dc6\" (UID: \"5410b784-9693-43c3-9f8a-43084f540dc6\") " Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.079339 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-scripts" (OuterVolumeSpecName: "scripts") pod "5410b784-9693-43c3-9f8a-43084f540dc6" (UID: "5410b784-9693-43c3-9f8a-43084f540dc6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.081865 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5410b784-9693-43c3-9f8a-43084f540dc6-kube-api-access-ft9jw" (OuterVolumeSpecName: "kube-api-access-ft9jw") pod "5410b784-9693-43c3-9f8a-43084f540dc6" (UID: "5410b784-9693-43c3-9f8a-43084f540dc6"). InnerVolumeSpecName "kube-api-access-ft9jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.123508 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5410b784-9693-43c3-9f8a-43084f540dc6" (UID: "5410b784-9693-43c3-9f8a-43084f540dc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.131938 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-config-data" (OuterVolumeSpecName: "config-data") pod "5410b784-9693-43c3-9f8a-43084f540dc6" (UID: "5410b784-9693-43c3-9f8a-43084f540dc6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.176028 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.176090 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.176108 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft9jw\" (UniqueName: \"kubernetes.io/projected/5410b784-9693-43c3-9f8a-43084f540dc6-kube-api-access-ft9jw\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.176150 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5410b784-9693-43c3-9f8a-43084f540dc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.587427 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rzwfw" event={"ID":"5410b784-9693-43c3-9f8a-43084f540dc6","Type":"ContainerDied","Data":"8813944544d8e609870842206890aa30cb2bee1b9a46bf080ab02ceeafe7980a"} Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.587470 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8813944544d8e609870842206890aa30cb2bee1b9a46bf080ab02ceeafe7980a" Nov 24 11:39:54 crc kubenswrapper[4678]: I1124 11:39:54.587529 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rzwfw" Nov 24 11:39:55 crc kubenswrapper[4678]: I1124 11:39:55.864060 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 11:39:56 crc kubenswrapper[4678]: I1124 11:39:56.997070 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 11:39:57 crc kubenswrapper[4678]: I1124 11:39:57.030799 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 11:39:57 crc kubenswrapper[4678]: I1124 11:39:57.042937 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:39:57 crc kubenswrapper[4678]: I1124 11:39:57.043016 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:39:57 crc kubenswrapper[4678]: I1124 11:39:57.663738 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.126895 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.245:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.126971 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.245:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.208054 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 11:39:58 crc kubenswrapper[4678]: E1124 
11:39:58.208916 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5410b784-9693-43c3-9f8a-43084f540dc6" containerName="aodh-db-sync" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.208939 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5410b784-9693-43c3-9f8a-43084f540dc6" containerName="aodh-db-sync" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.209244 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5410b784-9693-43c3-9f8a-43084f540dc6" containerName="aodh-db-sync" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.212995 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.218464 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.218628 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.221336 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bwbmq" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.241960 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.278578 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-config-data\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.278725 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzhk\" (UniqueName: \"kubernetes.io/projected/db4ec7ad-4c52-4fe5-b298-29a526184c2a-kube-api-access-8vzhk\") pod 
\"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.278796 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.278832 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-scripts\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.380875 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.381312 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-scripts\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.381534 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-config-data\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.381592 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vzhk\" (UniqueName: 
\"kubernetes.io/projected/db4ec7ad-4c52-4fe5-b298-29a526184c2a-kube-api-access-8vzhk\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.392519 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-config-data\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.394893 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-scripts\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.407398 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.412156 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vzhk\" (UniqueName: \"kubernetes.io/projected/db4ec7ad-4c52-4fe5-b298-29a526184c2a-kube-api-access-8vzhk\") pod \"aodh-0\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " pod="openstack/aodh-0" Nov 24 11:39:58 crc kubenswrapper[4678]: I1124 11:39:58.539345 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 11:39:59 crc kubenswrapper[4678]: I1124 11:39:59.454482 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 11:39:59 crc kubenswrapper[4678]: I1124 11:39:59.652942 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerStarted","Data":"267b02f03caba44314b7dc857333595aa7eb364a47337a53d1edde1f092e9a3d"} Nov 24 11:40:00 crc kubenswrapper[4678]: I1124 11:40:00.302387 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:40:00 crc kubenswrapper[4678]: I1124 11:40:00.302718 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:40:00 crc kubenswrapper[4678]: I1124 11:40:00.667134 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerStarted","Data":"c922df9ccae76f28dea5e2dec204385b587b6d6fd167f7cc3fb58d4ae02e8e7b"} Nov 24 11:40:00 crc kubenswrapper[4678]: I1124 11:40:00.979607 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:00 crc kubenswrapper[4678]: I1124 11:40:00.982021 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="ceilometer-central-agent" 
containerID="cri-o://30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9" gracePeriod=30 Nov 24 11:40:00 crc kubenswrapper[4678]: I1124 11:40:00.983882 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="proxy-httpd" containerID="cri-o://6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d" gracePeriod=30 Nov 24 11:40:00 crc kubenswrapper[4678]: I1124 11:40:00.983987 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="ceilometer-notification-agent" containerID="cri-o://185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d" gracePeriod=30 Nov 24 11:40:00 crc kubenswrapper[4678]: I1124 11:40:00.984018 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="sg-core" containerID="cri-o://48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1" gracePeriod=30 Nov 24 11:40:01 crc kubenswrapper[4678]: I1124 11:40:01.009405 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": EOF" Nov 24 11:40:01 crc kubenswrapper[4678]: I1124 11:40:01.696331 4678 generic.go:334] "Generic (PLEG): container finished" podID="68a77cae-074a-4561-9b14-16b07e793d63" containerID="6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d" exitCode=0 Nov 24 11:40:01 crc kubenswrapper[4678]: I1124 11:40:01.696634 4678 generic.go:334] "Generic (PLEG): container finished" podID="68a77cae-074a-4561-9b14-16b07e793d63" containerID="48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1" exitCode=2 Nov 24 11:40:01 crc kubenswrapper[4678]: I1124 
11:40:01.696654 4678 generic.go:334] "Generic (PLEG): container finished" podID="68a77cae-074a-4561-9b14-16b07e793d63" containerID="30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9" exitCode=0 Nov 24 11:40:01 crc kubenswrapper[4678]: I1124 11:40:01.696494 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerDied","Data":"6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d"} Nov 24 11:40:01 crc kubenswrapper[4678]: I1124 11:40:01.696729 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerDied","Data":"48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1"} Nov 24 11:40:01 crc kubenswrapper[4678]: I1124 11:40:01.696749 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerDied","Data":"30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9"} Nov 24 11:40:01 crc kubenswrapper[4678]: I1124 11:40:01.929935 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 11:40:02 crc kubenswrapper[4678]: I1124 11:40:02.717264 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerStarted","Data":"0d6f607cbc91f48c23bf550b187b2168ee391ed884fbb797369b278d5eef0ca8"} Nov 24 11:40:02 crc kubenswrapper[4678]: I1124 11:40:02.769345 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:40:02 crc kubenswrapper[4678]: I1124 11:40:02.770308 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:40:02 crc kubenswrapper[4678]: I1124 11:40:02.827959 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Nov 24 11:40:03 crc kubenswrapper[4678]: I1124 11:40:03.734442 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerStarted","Data":"09c68769805841926934d3de1ff3eb1c0c3bb4eb0caefc61d6495abff8f0c1af"} Nov 24 11:40:03 crc kubenswrapper[4678]: I1124 11:40:03.743422 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.525564 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.707376 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp45r\" (UniqueName: \"kubernetes.io/projected/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-kube-api-access-bp45r\") pod \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.707760 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-config-data\") pod \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.707877 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-combined-ca-bundle\") pod \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\" (UID: \"2ce4708c-9dae-4c99-95c9-9ea6c62304c1\") " Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.715523 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-kube-api-access-bp45r" 
(OuterVolumeSpecName: "kube-api-access-bp45r") pod "2ce4708c-9dae-4c99-95c9-9ea6c62304c1" (UID: "2ce4708c-9dae-4c99-95c9-9ea6c62304c1"). InnerVolumeSpecName "kube-api-access-bp45r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.746177 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ce4708c-9dae-4c99-95c9-9ea6c62304c1" (UID: "2ce4708c-9dae-4c99-95c9-9ea6c62304c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.747331 4678 generic.go:334] "Generic (PLEG): container finished" podID="2ce4708c-9dae-4c99-95c9-9ea6c62304c1" containerID="0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c" exitCode=137 Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.747811 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.748837 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2ce4708c-9dae-4c99-95c9-9ea6c62304c1","Type":"ContainerDied","Data":"0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c"} Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.748980 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2ce4708c-9dae-4c99-95c9-9ea6c62304c1","Type":"ContainerDied","Data":"f5341d1ed1c20d47b9524824f6185723f8d99350bbcf1bbd7d08265f8c7ab87f"} Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.749053 4678 scope.go:117] "RemoveContainer" containerID="0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.767817 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-config-data" (OuterVolumeSpecName: "config-data") pod "2ce4708c-9dae-4c99-95c9-9ea6c62304c1" (UID: "2ce4708c-9dae-4c99-95c9-9ea6c62304c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.811539 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp45r\" (UniqueName: \"kubernetes.io/projected/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-kube-api-access-bp45r\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.811851 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:04 crc kubenswrapper[4678]: I1124 11:40:04.811911 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4708c-9dae-4c99-95c9-9ea6c62304c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.100360 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.118693 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.137139 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:40:05 crc kubenswrapper[4678]: E1124 11:40:05.138160 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce4708c-9dae-4c99-95c9-9ea6c62304c1" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.138192 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce4708c-9dae-4c99-95c9-9ea6c62304c1" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.138574 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce4708c-9dae-4c99-95c9-9ea6c62304c1" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 
11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.140041 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.143358 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.143851 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.144110 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.160171 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.171722 4678 scope.go:117] "RemoveContainer" containerID="0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c" Nov 24 11:40:05 crc kubenswrapper[4678]: E1124 11:40:05.172347 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c\": container with ID starting with 0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c not found: ID does not exist" containerID="0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.172390 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c"} err="failed to get container status \"0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c\": rpc error: code = NotFound desc = could not find container \"0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c\": container with ID 
starting with 0b2fedda53c1f54677620332e09af28b054980461994f69e89c2358c219c443c not found: ID does not exist" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.222822 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.222930 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjd9\" (UniqueName: \"kubernetes.io/projected/86fd0d08-2581-4fda-a843-7ed2b3b7f756-kube-api-access-twjd9\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.222976 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.224075 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.224285 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.326424 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.326936 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twjd9\" (UniqueName: \"kubernetes.io/projected/86fd0d08-2581-4fda-a843-7ed2b3b7f756-kube-api-access-twjd9\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.326995 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.327198 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.327262 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.332020 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.332792 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.332859 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.335270 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86fd0d08-2581-4fda-a843-7ed2b3b7f756-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.345764 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twjd9\" (UniqueName: \"kubernetes.io/projected/86fd0d08-2581-4fda-a843-7ed2b3b7f756-kube-api-access-twjd9\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"86fd0d08-2581-4fda-a843-7ed2b3b7f756\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.462338 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.509335 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": dial tcp 10.217.0.237:3000: connect: connection refused" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.781767 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerStarted","Data":"96c531c4a57a2de56b3b6fa821d3cc8e221a68f6ff85ec020fa9f8c7fb238f5a"} Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.782500 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-listener" containerID="cri-o://96c531c4a57a2de56b3b6fa821d3cc8e221a68f6ff85ec020fa9f8c7fb238f5a" gracePeriod=30 Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.783536 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-api" containerID="cri-o://c922df9ccae76f28dea5e2dec204385b587b6d6fd167f7cc3fb58d4ae02e8e7b" gracePeriod=30 Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.783617 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-notifier" containerID="cri-o://09c68769805841926934d3de1ff3eb1c0c3bb4eb0caefc61d6495abff8f0c1af" gracePeriod=30 Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.783683 4678 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-evaluator" containerID="cri-o://0d6f607cbc91f48c23bf550b187b2168ee391ed884fbb797369b278d5eef0ca8" gracePeriod=30 Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.829821 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.0971457 podStartE2EDuration="7.829789047s" podCreationTimestamp="2025-11-24 11:39:58 +0000 UTC" firstStartedPulling="2025-11-24 11:39:59.468984461 +0000 UTC m=+1410.400044100" lastFinishedPulling="2025-11-24 11:40:05.201627808 +0000 UTC m=+1416.132687447" observedRunningTime="2025-11-24 11:40:05.821568697 +0000 UTC m=+1416.752628336" watchObservedRunningTime="2025-11-24 11:40:05.829789047 +0000 UTC m=+1416.760848686" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.917591 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce4708c-9dae-4c99-95c9-9ea6c62304c1" path="/var/lib/kubelet/pods/2ce4708c-9dae-4c99-95c9-9ea6c62304c1/volumes" Nov 24 11:40:05 crc kubenswrapper[4678]: I1124 11:40:05.963157 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 11:40:06 crc kubenswrapper[4678]: W1124 11:40:06.033179 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86fd0d08_2581_4fda_a843_7ed2b3b7f756.slice/crio-81b109a15d03bbbcbecfad161355ca2ece2718d78b25306778a5b628c289788b WatchSource:0}: Error finding container 81b109a15d03bbbcbecfad161355ca2ece2718d78b25306778a5b628c289788b: Status 404 returned error can't find the container with id 81b109a15d03bbbcbecfad161355ca2ece2718d78b25306778a5b628c289788b Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.595954 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.673963 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s6qj\" (UniqueName: \"kubernetes.io/projected/68a77cae-074a-4561-9b14-16b07e793d63-kube-api-access-7s6qj\") pod \"68a77cae-074a-4561-9b14-16b07e793d63\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.674178 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-scripts\") pod \"68a77cae-074a-4561-9b14-16b07e793d63\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.674218 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-config-data\") pod \"68a77cae-074a-4561-9b14-16b07e793d63\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.674310 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-sg-core-conf-yaml\") pod \"68a77cae-074a-4561-9b14-16b07e793d63\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.674385 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-log-httpd\") pod \"68a77cae-074a-4561-9b14-16b07e793d63\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.674992 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "68a77cae-074a-4561-9b14-16b07e793d63" (UID: "68a77cae-074a-4561-9b14-16b07e793d63"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.675129 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-run-httpd\") pod \"68a77cae-074a-4561-9b14-16b07e793d63\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.675203 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-combined-ca-bundle\") pod \"68a77cae-074a-4561-9b14-16b07e793d63\" (UID: \"68a77cae-074a-4561-9b14-16b07e793d63\") " Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.675412 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "68a77cae-074a-4561-9b14-16b07e793d63" (UID: "68a77cae-074a-4561-9b14-16b07e793d63"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.676240 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.676269 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68a77cae-074a-4561-9b14-16b07e793d63-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.679313 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a77cae-074a-4561-9b14-16b07e793d63-kube-api-access-7s6qj" (OuterVolumeSpecName: "kube-api-access-7s6qj") pod "68a77cae-074a-4561-9b14-16b07e793d63" (UID: "68a77cae-074a-4561-9b14-16b07e793d63"). InnerVolumeSpecName "kube-api-access-7s6qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.680080 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-scripts" (OuterVolumeSpecName: "scripts") pod "68a77cae-074a-4561-9b14-16b07e793d63" (UID: "68a77cae-074a-4561-9b14-16b07e793d63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.716659 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "68a77cae-074a-4561-9b14-16b07e793d63" (UID: "68a77cae-074a-4561-9b14-16b07e793d63"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.779047 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s6qj\" (UniqueName: \"kubernetes.io/projected/68a77cae-074a-4561-9b14-16b07e793d63-kube-api-access-7s6qj\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.779082 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.779094 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.784502 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68a77cae-074a-4561-9b14-16b07e793d63" (UID: "68a77cae-074a-4561-9b14-16b07e793d63"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.804029 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"86fd0d08-2581-4fda-a843-7ed2b3b7f756","Type":"ContainerStarted","Data":"541129657e0e9f03ffe27b39ace1d2db0a7e77d7f06a6c3570ac8bfc06a82057"} Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.804087 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"86fd0d08-2581-4fda-a843-7ed2b3b7f756","Type":"ContainerStarted","Data":"81b109a15d03bbbcbecfad161355ca2ece2718d78b25306778a5b628c289788b"} Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.814663 4678 generic.go:334] "Generic (PLEG): container finished" podID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerID="09c68769805841926934d3de1ff3eb1c0c3bb4eb0caefc61d6495abff8f0c1af" exitCode=0 Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.814727 4678 generic.go:334] "Generic (PLEG): container finished" podID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerID="0d6f607cbc91f48c23bf550b187b2168ee391ed884fbb797369b278d5eef0ca8" exitCode=0 Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.814735 4678 generic.go:334] "Generic (PLEG): container finished" podID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerID="c922df9ccae76f28dea5e2dec204385b587b6d6fd167f7cc3fb58d4ae02e8e7b" exitCode=0 Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.814785 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerDied","Data":"09c68769805841926934d3de1ff3eb1c0c3bb4eb0caefc61d6495abff8f0c1af"} Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.814814 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerDied","Data":"0d6f607cbc91f48c23bf550b187b2168ee391ed884fbb797369b278d5eef0ca8"} Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.814823 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerDied","Data":"c922df9ccae76f28dea5e2dec204385b587b6d6fd167f7cc3fb58d4ae02e8e7b"} Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.817604 4678 generic.go:334] "Generic (PLEG): container finished" podID="68a77cae-074a-4561-9b14-16b07e793d63" containerID="185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d" exitCode=0 Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.817634 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerDied","Data":"185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d"} Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.817689 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68a77cae-074a-4561-9b14-16b07e793d63","Type":"ContainerDied","Data":"798032cd712adbdc6ab7f5216a82ae426965d4abe2992c0f6d1549e9cea3e4d4"} Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.817710 4678 scope.go:117] "RemoveContainer" containerID="6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.817921 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.863339 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-config-data" (OuterVolumeSpecName: "config-data") pod "68a77cae-074a-4561-9b14-16b07e793d63" (UID: "68a77cae-074a-4561-9b14-16b07e793d63"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.881243 4678 scope.go:117] "RemoveContainer" containerID="48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.888837 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.887257607 podStartE2EDuration="1.887257607s" podCreationTimestamp="2025-11-24 11:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:40:06.83951581 +0000 UTC m=+1417.770575469" watchObservedRunningTime="2025-11-24 11:40:06.887257607 +0000 UTC m=+1417.818317246" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.898076 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.898134 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a77cae-074a-4561-9b14-16b07e793d63-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.958599 4678 scope.go:117] "RemoveContainer" containerID="185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d" Nov 24 11:40:06 crc kubenswrapper[4678]: I1124 11:40:06.994076 4678 scope.go:117] "RemoveContainer" containerID="30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.017074 4678 scope.go:117] "RemoveContainer" containerID="6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d" Nov 24 11:40:07 crc kubenswrapper[4678]: E1124 11:40:07.017506 4678 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d\": container with ID starting with 6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d not found: ID does not exist" containerID="6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.017536 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d"} err="failed to get container status \"6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d\": rpc error: code = NotFound desc = could not find container \"6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d\": container with ID starting with 6162089d75b4017d7c20697be115834a83c74a88213615693a8f2ac904c5a58d not found: ID does not exist" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.017556 4678 scope.go:117] "RemoveContainer" containerID="48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1" Nov 24 11:40:07 crc kubenswrapper[4678]: E1124 11:40:07.017920 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1\": container with ID starting with 48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1 not found: ID does not exist" containerID="48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.017959 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1"} err="failed to get container status \"48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1\": rpc error: code = NotFound desc = could not find 
container \"48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1\": container with ID starting with 48a2df694252fe0a56a19ec06e5c54ab03e790211539821607f07a564d2d79f1 not found: ID does not exist" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.017977 4678 scope.go:117] "RemoveContainer" containerID="185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d" Nov 24 11:40:07 crc kubenswrapper[4678]: E1124 11:40:07.018237 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d\": container with ID starting with 185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d not found: ID does not exist" containerID="185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.018257 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d"} err="failed to get container status \"185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d\": rpc error: code = NotFound desc = could not find container \"185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d\": container with ID starting with 185e539d51d81655454c6d0375198a7b0b250642ed353a2c5be01f3c12d55e7d not found: ID does not exist" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.018285 4678 scope.go:117] "RemoveContainer" containerID="30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9" Nov 24 11:40:07 crc kubenswrapper[4678]: E1124 11:40:07.018528 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9\": container with ID starting with 30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9 not found: ID does 
not exist" containerID="30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.018546 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9"} err="failed to get container status \"30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9\": rpc error: code = NotFound desc = could not find container \"30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9\": container with ID starting with 30aa677da80e841cfdebe3eaad25c79f64b609b257b8c1f0771bd0a25ec932b9 not found: ID does not exist" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.048702 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.049147 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.051742 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.052993 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.165010 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.173351 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.198228 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:07 crc kubenswrapper[4678]: E1124 11:40:07.198854 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="proxy-httpd" Nov 24 11:40:07 crc 
kubenswrapper[4678]: I1124 11:40:07.198872 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="proxy-httpd" Nov 24 11:40:07 crc kubenswrapper[4678]: E1124 11:40:07.198894 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="sg-core" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.198902 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="sg-core" Nov 24 11:40:07 crc kubenswrapper[4678]: E1124 11:40:07.198933 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="ceilometer-notification-agent" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.198940 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="ceilometer-notification-agent" Nov 24 11:40:07 crc kubenswrapper[4678]: E1124 11:40:07.198962 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="ceilometer-central-agent" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.198969 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="ceilometer-central-agent" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.199203 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="ceilometer-notification-agent" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.199214 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="proxy-httpd" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.199225 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a77cae-074a-4561-9b14-16b07e793d63" 
containerName="ceilometer-central-agent" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.199235 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a77cae-074a-4561-9b14-16b07e793d63" containerName="sg-core" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.201483 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.204057 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.206084 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.207520 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.307029 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-log-httpd\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.307320 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.307445 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-run-httpd\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 
24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.307548 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-scripts\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.307684 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-config-data\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.307815 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hr6\" (UniqueName: \"kubernetes.io/projected/e466b610-ff64-4dcb-b5bd-b17a92d62b67-kube-api-access-24hr6\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.308046 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.411408 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.411691 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-log-httpd\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.412180 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-log-httpd\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.412304 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.412379 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-run-httpd\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.412448 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-scripts\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.412523 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-config-data\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 
11:40:07.412628 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hr6\" (UniqueName: \"kubernetes.io/projected/e466b610-ff64-4dcb-b5bd-b17a92d62b67-kube-api-access-24hr6\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.414823 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-run-httpd\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.418548 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.419057 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.419200 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-scripts\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.422565 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-config-data\") pod \"ceilometer-0\" (UID: 
\"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.438151 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hr6\" (UniqueName: \"kubernetes.io/projected/e466b610-ff64-4dcb-b5bd-b17a92d62b67-kube-api-access-24hr6\") pod \"ceilometer-0\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.522940 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.833450 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.836378 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.918454 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68a77cae-074a-4561-9b14-16b07e793d63" path="/var/lib/kubelet/pods/68a77cae-074a-4561-9b14-16b07e793d63/volumes" Nov 24 11:40:07 crc kubenswrapper[4678]: I1124 11:40:07.992768 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.059364 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-qlcnq"] Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.061564 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.078735 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-qlcnq"] Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.136001 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.136048 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.136105 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.136144 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk9h8\" (UniqueName: \"kubernetes.io/projected/d024ef08-351c-46f1-a000-8e6803d52572-kube-api-access-sk9h8\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.136177 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.136226 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-config\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.237714 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-config\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.237864 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.237894 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.237962 4678 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.238013 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk9h8\" (UniqueName: \"kubernetes.io/projected/d024ef08-351c-46f1-a000-8e6803d52572-kube-api-access-sk9h8\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.238062 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.239182 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.240264 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-config\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.241063 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.242316 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.243380 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.267914 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk9h8\" (UniqueName: \"kubernetes.io/projected/d024ef08-351c-46f1-a000-8e6803d52572-kube-api-access-sk9h8\") pod \"dnsmasq-dns-79b5d74c8c-qlcnq\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.388009 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.863722 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerStarted","Data":"0b1946a6b40f25c919ac01c2e51104299d952f355ef8119f7227f8151c5312d5"} Nov 24 11:40:08 crc kubenswrapper[4678]: I1124 11:40:08.983775 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-qlcnq"] Nov 24 11:40:09 crc kubenswrapper[4678]: I1124 11:40:09.881699 4678 generic.go:334] "Generic (PLEG): container finished" podID="d024ef08-351c-46f1-a000-8e6803d52572" containerID="c59cfe0972a8476e5fcde0aca5f23f90644c9a9799dbd8fe61b53c39632194cb" exitCode=0 Nov 24 11:40:09 crc kubenswrapper[4678]: I1124 11:40:09.882103 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" event={"ID":"d024ef08-351c-46f1-a000-8e6803d52572","Type":"ContainerDied","Data":"c59cfe0972a8476e5fcde0aca5f23f90644c9a9799dbd8fe61b53c39632194cb"} Nov 24 11:40:09 crc kubenswrapper[4678]: I1124 11:40:09.882139 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" event={"ID":"d024ef08-351c-46f1-a000-8e6803d52572","Type":"ContainerStarted","Data":"3547d333a1b8f8be0b04d2f62eec6c86479a32bfde5577d1745ae72f46479294"} Nov 24 11:40:09 crc kubenswrapper[4678]: I1124 11:40:09.889618 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerStarted","Data":"6e08aec007a909ac0a30476a92c8c935e0b3a65d7658224d67671f5882d22fdf"} Nov 24 11:40:09 crc kubenswrapper[4678]: I1124 11:40:09.889686 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerStarted","Data":"03f8acdc2bdebf73c98b0d34175b366b0de08f5d51e0a2e6caff919b215b5285"} Nov 24 11:40:10 crc kubenswrapper[4678]: I1124 11:40:10.124957 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:10 crc kubenswrapper[4678]: I1124 11:40:10.462935 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:10 crc kubenswrapper[4678]: I1124 11:40:10.654891 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:10 crc kubenswrapper[4678]: I1124 11:40:10.907537 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerStarted","Data":"d3f559e3fbbe3d4de678edac2727433511cd97c190a2a00593753adb461937b6"} Nov 24 11:40:10 crc kubenswrapper[4678]: I1124 11:40:10.912182 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" event={"ID":"d024ef08-351c-46f1-a000-8e6803d52572","Type":"ContainerStarted","Data":"ac880874918e97a5bcb7ccd306cc6c3909f3c1d4d60dd3b522a96b77c4574fe7"} Nov 24 11:40:10 crc kubenswrapper[4678]: I1124 11:40:10.912332 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-log" containerID="cri-o://c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3" gracePeriod=30 Nov 24 11:40:10 crc kubenswrapper[4678]: I1124 11:40:10.912373 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-api" containerID="cri-o://f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912" gracePeriod=30 Nov 24 11:40:10 crc kubenswrapper[4678]: I1124 11:40:10.941807 4678 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" podStartSLOduration=2.941785086 podStartE2EDuration="2.941785086s" podCreationTimestamp="2025-11-24 11:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:40:10.934761128 +0000 UTC m=+1421.865820767" watchObservedRunningTime="2025-11-24 11:40:10.941785086 +0000 UTC m=+1421.872844725" Nov 24 11:40:11 crc kubenswrapper[4678]: I1124 11:40:11.935268 4678 generic.go:334] "Generic (PLEG): container finished" podID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerID="c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3" exitCode=143 Nov 24 11:40:11 crc kubenswrapper[4678]: I1124 11:40:11.977063 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:11 crc kubenswrapper[4678]: I1124 11:40:11.977144 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9","Type":"ContainerDied","Data":"c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3"} Nov 24 11:40:12 crc kubenswrapper[4678]: I1124 11:40:12.954209 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerStarted","Data":"e1711f0e471f96f06bc6ff7f5d6282a61c02560e60b32a117971ed31dd28c4ca"} Nov 24 11:40:12 crc kubenswrapper[4678]: I1124 11:40:12.954505 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="ceilometer-notification-agent" containerID="cri-o://03f8acdc2bdebf73c98b0d34175b366b0de08f5d51e0a2e6caff919b215b5285" gracePeriod=30 Nov 24 11:40:12 crc kubenswrapper[4678]: I1124 11:40:12.954354 4678 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="ceilometer-central-agent" containerID="cri-o://6e08aec007a909ac0a30476a92c8c935e0b3a65d7658224d67671f5882d22fdf" gracePeriod=30 Nov 24 11:40:12 crc kubenswrapper[4678]: I1124 11:40:12.954422 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="sg-core" containerID="cri-o://d3f559e3fbbe3d4de678edac2727433511cd97c190a2a00593753adb461937b6" gracePeriod=30 Nov 24 11:40:12 crc kubenswrapper[4678]: I1124 11:40:12.954486 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="proxy-httpd" containerID="cri-o://e1711f0e471f96f06bc6ff7f5d6282a61c02560e60b32a117971ed31dd28c4ca" gracePeriod=30 Nov 24 11:40:12 crc kubenswrapper[4678]: I1124 11:40:12.954701 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:40:12 crc kubenswrapper[4678]: I1124 11:40:12.984875 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.356340996 podStartE2EDuration="5.984853463s" podCreationTimestamp="2025-11-24 11:40:07 +0000 UTC" firstStartedPulling="2025-11-24 11:40:08.00545174 +0000 UTC m=+1418.936511379" lastFinishedPulling="2025-11-24 11:40:11.633964217 +0000 UTC m=+1422.565023846" observedRunningTime="2025-11-24 11:40:12.982781217 +0000 UTC m=+1423.913840876" watchObservedRunningTime="2025-11-24 11:40:12.984853463 +0000 UTC m=+1423.915913102" Nov 24 11:40:13 crc kubenswrapper[4678]: I1124 11:40:13.966474 4678 generic.go:334] "Generic (PLEG): container finished" podID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerID="e1711f0e471f96f06bc6ff7f5d6282a61c02560e60b32a117971ed31dd28c4ca" exitCode=0 Nov 24 11:40:13 crc kubenswrapper[4678]: 
I1124 11:40:13.966770 4678 generic.go:334] "Generic (PLEG): container finished" podID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerID="d3f559e3fbbe3d4de678edac2727433511cd97c190a2a00593753adb461937b6" exitCode=2 Nov 24 11:40:13 crc kubenswrapper[4678]: I1124 11:40:13.966778 4678 generic.go:334] "Generic (PLEG): container finished" podID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerID="03f8acdc2bdebf73c98b0d34175b366b0de08f5d51e0a2e6caff919b215b5285" exitCode=0 Nov 24 11:40:13 crc kubenswrapper[4678]: I1124 11:40:13.966575 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerDied","Data":"e1711f0e471f96f06bc6ff7f5d6282a61c02560e60b32a117971ed31dd28c4ca"} Nov 24 11:40:13 crc kubenswrapper[4678]: I1124 11:40:13.966818 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerDied","Data":"d3f559e3fbbe3d4de678edac2727433511cd97c190a2a00593753adb461937b6"} Nov 24 11:40:13 crc kubenswrapper[4678]: I1124 11:40:13.966833 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerDied","Data":"03f8acdc2bdebf73c98b0d34175b366b0de08f5d51e0a2e6caff919b215b5285"} Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.640183 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.734162 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-combined-ca-bundle\") pod \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.734228 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x7nw\" (UniqueName: \"kubernetes.io/projected/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-kube-api-access-6x7nw\") pod \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.734308 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-logs\") pod \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.734418 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-config-data\") pod \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\" (UID: \"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9\") " Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.737235 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-logs" (OuterVolumeSpecName: "logs") pod "2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" (UID: "2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.741505 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-kube-api-access-6x7nw" (OuterVolumeSpecName: "kube-api-access-6x7nw") pod "2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" (UID: "2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9"). InnerVolumeSpecName "kube-api-access-6x7nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.785187 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" (UID: "2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.829410 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-config-data" (OuterVolumeSpecName: "config-data") pod "2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" (UID: "2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.842346 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.842377 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.842390 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x7nw\" (UniqueName: \"kubernetes.io/projected/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-kube-api-access-6x7nw\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.842399 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.982490 4678 generic.go:334] "Generic (PLEG): container finished" podID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerID="f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912" exitCode=0 Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.982557 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.982584 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9","Type":"ContainerDied","Data":"f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912"} Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.982637 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9","Type":"ContainerDied","Data":"f89038e0780f006cf40def0f50436d3ee888dfd71ef4b83479cecb3c3ec16c4f"} Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.982661 4678 scope.go:117] "RemoveContainer" containerID="f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912" Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.988451 4678 generic.go:334] "Generic (PLEG): container finished" podID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerID="6e08aec007a909ac0a30476a92c8c935e0b3a65d7658224d67671f5882d22fdf" exitCode=0 Nov 24 11:40:14 crc kubenswrapper[4678]: I1124 11:40:14.988504 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerDied","Data":"6e08aec007a909ac0a30476a92c8c935e0b3a65d7658224d67671f5882d22fdf"} Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.032790 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.056271 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.096118 4678 scope.go:117] "RemoveContainer" containerID="c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.102049 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 
11:40:15 crc kubenswrapper[4678]: E1124 11:40:15.102846 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-log" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.102868 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-log" Nov 24 11:40:15 crc kubenswrapper[4678]: E1124 11:40:15.102903 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-api" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.102910 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-api" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.103204 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-log" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.103229 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" containerName="nova-api-api" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.105017 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.108567 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.108815 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.109562 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.123654 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.146295 4678 scope.go:117] "RemoveContainer" containerID="f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912" Nov 24 11:40:15 crc kubenswrapper[4678]: E1124 11:40:15.146758 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912\": container with ID starting with f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912 not found: ID does not exist" containerID="f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.146788 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912"} err="failed to get container status \"f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912\": rpc error: code = NotFound desc = could not find container \"f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912\": container with ID starting with f9fda07d71c01d8f8da65391cd714450ac3834e7ac70a729dda1eaebe66da912 not found: ID does not exist" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.146815 4678 
scope.go:117] "RemoveContainer" containerID="c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3" Nov 24 11:40:15 crc kubenswrapper[4678]: E1124 11:40:15.147104 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3\": container with ID starting with c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3 not found: ID does not exist" containerID="c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.147152 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3"} err="failed to get container status \"c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3\": rpc error: code = NotFound desc = could not find container \"c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3\": container with ID starting with c575075d7211e590cbab830e1f6c7d550e6e5dbad978862e641bb1035eb87bd3 not found: ID does not exist" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.255156 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.255243 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.255425 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5e41e21-1a4c-4077-99a5-fae558577594-logs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.255450 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.255474 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-config-data\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.255511 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4tv\" (UniqueName: \"kubernetes.io/projected/b5e41e21-1a4c-4077-99a5-fae558577594-kube-api-access-qc4tv\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.362438 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5e41e21-1a4c-4077-99a5-fae558577594-logs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.362507 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.362533 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-config-data\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.362582 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4tv\" (UniqueName: \"kubernetes.io/projected/b5e41e21-1a4c-4077-99a5-fae558577594-kube-api-access-qc4tv\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.362772 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.362799 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.363345 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5e41e21-1a4c-4077-99a5-fae558577594-logs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.373630 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.373734 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-config-data\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.396991 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.402528 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc4tv\" (UniqueName: \"kubernetes.io/projected/b5e41e21-1a4c-4077-99a5-fae558577594-kube-api-access-qc4tv\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.418263 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.436261 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.467216 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.672919 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.763121 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.778841 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-combined-ca-bundle\") pod \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.778890 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24hr6\" (UniqueName: \"kubernetes.io/projected/e466b610-ff64-4dcb-b5bd-b17a92d62b67-kube-api-access-24hr6\") pod \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.779063 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-config-data\") pod \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.779111 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-run-httpd\") pod \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " 
Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.779145 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-scripts\") pod \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.779227 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-sg-core-conf-yaml\") pod \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.779359 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-log-httpd\") pod \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\" (UID: \"e466b610-ff64-4dcb-b5bd-b17a92d62b67\") " Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.780862 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e466b610-ff64-4dcb-b5bd-b17a92d62b67" (UID: "e466b610-ff64-4dcb-b5bd-b17a92d62b67"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.784209 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e466b610-ff64-4dcb-b5bd-b17a92d62b67" (UID: "e466b610-ff64-4dcb-b5bd-b17a92d62b67"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.790455 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-scripts" (OuterVolumeSpecName: "scripts") pod "e466b610-ff64-4dcb-b5bd-b17a92d62b67" (UID: "e466b610-ff64-4dcb-b5bd-b17a92d62b67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.790470 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e466b610-ff64-4dcb-b5bd-b17a92d62b67-kube-api-access-24hr6" (OuterVolumeSpecName: "kube-api-access-24hr6") pod "e466b610-ff64-4dcb-b5bd-b17a92d62b67" (UID: "e466b610-ff64-4dcb-b5bd-b17a92d62b67"). InnerVolumeSpecName "kube-api-access-24hr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.880947 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e466b610-ff64-4dcb-b5bd-b17a92d62b67" (UID: "e466b610-ff64-4dcb-b5bd-b17a92d62b67"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.882288 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.882327 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.882340 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.882353 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e466b610-ff64-4dcb-b5bd-b17a92d62b67-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.882365 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24hr6\" (UniqueName: \"kubernetes.io/projected/e466b610-ff64-4dcb-b5bd-b17a92d62b67-kube-api-access-24hr6\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.925256 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9" path="/var/lib/kubelet/pods/2e4f7ec7-ac0a-40ef-8e15-4ac708aad7d9/volumes" Nov 24 11:40:15 crc kubenswrapper[4678]: I1124 11:40:15.993069 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-config-data" (OuterVolumeSpecName: "config-data") pod "e466b610-ff64-4dcb-b5bd-b17a92d62b67" (UID: "e466b610-ff64-4dcb-b5bd-b17a92d62b67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.004384 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e466b610-ff64-4dcb-b5bd-b17a92d62b67" (UID: "e466b610-ff64-4dcb-b5bd-b17a92d62b67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.008039 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e466b610-ff64-4dcb-b5bd-b17a92d62b67","Type":"ContainerDied","Data":"0b1946a6b40f25c919ac01c2e51104299d952f355ef8119f7227f8151c5312d5"} Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.008069 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.009145 4678 scope.go:117] "RemoveContainer" containerID="e1711f0e471f96f06bc6ff7f5d6282a61c02560e60b32a117971ed31dd28c4ca" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.041365 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.050294 4678 scope.go:117] "RemoveContainer" containerID="d3f559e3fbbe3d4de678edac2727433511cd97c190a2a00593753adb461937b6" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.058807 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.089141 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.089185 4678 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e466b610-ff64-4dcb-b5bd-b17a92d62b67-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.089577 4678 scope.go:117] "RemoveContainer" containerID="03f8acdc2bdebf73c98b0d34175b366b0de08f5d51e0a2e6caff919b215b5285" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.100144 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.129838 4678 scope.go:117] "RemoveContainer" containerID="6e08aec007a909ac0a30476a92c8c935e0b3a65d7658224d67671f5882d22fdf" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.129969 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:16 crc kubenswrapper[4678]: E1124 11:40:16.130606 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="ceilometer-central-agent" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.130628 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="ceilometer-central-agent" Nov 24 11:40:16 crc kubenswrapper[4678]: E1124 11:40:16.130711 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="proxy-httpd" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.130721 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="proxy-httpd" Nov 24 11:40:16 crc kubenswrapper[4678]: E1124 11:40:16.130736 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="sg-core" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.130743 4678 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="sg-core" Nov 24 11:40:16 crc kubenswrapper[4678]: E1124 11:40:16.130773 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="ceilometer-notification-agent" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.130781 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="ceilometer-notification-agent" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.131042 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="sg-core" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.131074 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="ceilometer-notification-agent" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.131104 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="ceilometer-central-agent" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.131120 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" containerName="proxy-httpd" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.133630 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.151315 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.151519 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.167963 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.184032 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.204385 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-log-httpd\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.204708 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.204796 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-config-data\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.204837 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshg5\" (UniqueName: 
\"kubernetes.io/projected/63bc3a21-7960-4c56-8967-c43986fc8b05-kube-api-access-hshg5\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.205022 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-scripts\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.205096 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.205131 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-run-httpd\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.280251 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8sk"] Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.281712 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.290239 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.290461 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.293810 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8sk"] Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.307800 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-config-data\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.307886 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshg5\" (UniqueName: \"kubernetes.io/projected/63bc3a21-7960-4c56-8967-c43986fc8b05-kube-api-access-hshg5\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.308008 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-scripts\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.308061 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" 
Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.308094 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-run-httpd\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.308149 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-log-httpd\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.308259 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.314535 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.317769 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-config-data\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.318069 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-run-httpd\") pod 
\"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.318484 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-log-httpd\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.322117 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-scripts\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.332298 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.338031 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshg5\" (UniqueName: \"kubernetes.io/projected/63bc3a21-7960-4c56-8967-c43986fc8b05-kube-api-access-hshg5\") pod \"ceilometer-0\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.411325 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-config-data\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.411423 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwc4\" (UniqueName: \"kubernetes.io/projected/9d4c80df-952c-4b91-9957-5629417ef13a-kube-api-access-mzwc4\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.411454 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.411575 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-scripts\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.513649 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.513818 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-scripts\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.513902 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-config-data\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.513956 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzwc4\" (UniqueName: \"kubernetes.io/projected/9d4c80df-952c-4b91-9957-5629417ef13a-kube-api-access-mzwc4\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.519820 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-scripts\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.520126 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-config-data\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.520451 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.521802 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.536256 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzwc4\" (UniqueName: \"kubernetes.io/projected/9d4c80df-952c-4b91-9957-5629417ef13a-kube-api-access-mzwc4\") pod \"nova-cell1-cell-mapping-qn8sk\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:16 crc kubenswrapper[4678]: I1124 11:40:16.833794 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:17 crc kubenswrapper[4678]: I1124 11:40:17.057093 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:17 crc kubenswrapper[4678]: I1124 11:40:17.068662 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5e41e21-1a4c-4077-99a5-fae558577594","Type":"ContainerStarted","Data":"c6955fa73a09f11d0b4f513fb260643f17e2ae596ea4d4dca2cdec8131d0c879"} Nov 24 11:40:17 crc kubenswrapper[4678]: I1124 11:40:17.068753 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5e41e21-1a4c-4077-99a5-fae558577594","Type":"ContainerStarted","Data":"99d7583c232800315134d1afa1eb2f19b33ba8aaf7e24064d6bf7f604c2f2a90"} Nov 24 11:40:17 crc kubenswrapper[4678]: I1124 11:40:17.068766 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5e41e21-1a4c-4077-99a5-fae558577594","Type":"ContainerStarted","Data":"6cadee65947635b47c436ffcc173c218026d411abe0b0d2668f683fc82e5192d"} Nov 24 11:40:17 crc kubenswrapper[4678]: I1124 11:40:17.113035 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.112993001 podStartE2EDuration="2.112993001s" podCreationTimestamp="2025-11-24 11:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:40:17.093033477 +0000 UTC m=+1428.024093116" watchObservedRunningTime="2025-11-24 11:40:17.112993001 +0000 UTC m=+1428.044052650" Nov 24 11:40:17 crc kubenswrapper[4678]: I1124 11:40:17.457116 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8sk"] Nov 24 11:40:17 crc kubenswrapper[4678]: I1124 11:40:17.910716 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e466b610-ff64-4dcb-b5bd-b17a92d62b67" path="/var/lib/kubelet/pods/e466b610-ff64-4dcb-b5bd-b17a92d62b67/volumes" Nov 24 11:40:18 crc kubenswrapper[4678]: I1124 11:40:18.083517 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerStarted","Data":"7dc534d8b3a884ab52f5f27c1743263c94e0aed96ed946dacd23fde4a7a943f9"} Nov 24 11:40:18 crc kubenswrapper[4678]: I1124 11:40:18.083571 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerStarted","Data":"40a6ab284cea15b3b9be35e55cbdec0ab1e4220b5c69ce63b18a329c0497a0b9"} Nov 24 11:40:18 crc kubenswrapper[4678]: I1124 11:40:18.085117 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn8sk" event={"ID":"9d4c80df-952c-4b91-9957-5629417ef13a","Type":"ContainerStarted","Data":"9a9c24edc7320c63a99e052fc4a677b5f4235aa9df14f2e71abc1cd7c87f36b8"} Nov 24 11:40:18 crc kubenswrapper[4678]: I1124 11:40:18.085153 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn8sk" event={"ID":"9d4c80df-952c-4b91-9957-5629417ef13a","Type":"ContainerStarted","Data":"6614abf0fa1cbfc3b58ebc2dc70beabb2de1fb23185d2f8d4b7b75ed02b5a772"} Nov 24 11:40:18 crc kubenswrapper[4678]: I1124 11:40:18.110314 4678 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell1-cell-mapping-qn8sk" podStartSLOduration=2.110295889 podStartE2EDuration="2.110295889s" podCreationTimestamp="2025-11-24 11:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:40:18.101989937 +0000 UTC m=+1429.033049576" watchObservedRunningTime="2025-11-24 11:40:18.110295889 +0000 UTC m=+1429.041355528" Nov 24 11:40:18 crc kubenswrapper[4678]: I1124 11:40:18.389847 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:40:18 crc kubenswrapper[4678]: I1124 11:40:18.470771 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-tltnp"] Nov 24 11:40:18 crc kubenswrapper[4678]: I1124 11:40:18.471462 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" podUID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" containerName="dnsmasq-dns" containerID="cri-o://8132111ec530ab0cfc3c88d6845ed1b264ade70a007a85f7dfc5c84046f38aad" gracePeriod=10 Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.146949 4678 generic.go:334] "Generic (PLEG): container finished" podID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" containerID="8132111ec530ab0cfc3c88d6845ed1b264ade70a007a85f7dfc5c84046f38aad" exitCode=0 Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.147055 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" event={"ID":"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3","Type":"ContainerDied","Data":"8132111ec530ab0cfc3c88d6845ed1b264ade70a007a85f7dfc5c84046f38aad"} Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.278983 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.425776 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g85zn\" (UniqueName: \"kubernetes.io/projected/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-kube-api-access-g85zn\") pod \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.426057 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-swift-storage-0\") pod \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.426086 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-nb\") pod \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.426125 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-sb\") pod \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.426379 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-svc\") pod \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.426431 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-config\") pod \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\" (UID: \"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3\") " Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.436152 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-kube-api-access-g85zn" (OuterVolumeSpecName: "kube-api-access-g85zn") pod "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" (UID: "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3"). InnerVolumeSpecName "kube-api-access-g85zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.530362 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" (UID: "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.540692 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.540755 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g85zn\" (UniqueName: \"kubernetes.io/projected/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-kube-api-access-g85zn\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.549095 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" (UID: "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.556250 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" (UID: "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.556512 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-config" (OuterVolumeSpecName: "config") pod "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" (UID: "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.583152 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" (UID: "a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.644091 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.644137 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.644155 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:19 crc kubenswrapper[4678]: I1124 11:40:19.644168 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:20 crc kubenswrapper[4678]: I1124 11:40:20.162919 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerStarted","Data":"359d073d4e549827de8ab778b1d1c985ed22f21bc18a6ea5571e5dc54b59581b"} Nov 24 11:40:20 crc kubenswrapper[4678]: I1124 11:40:20.165586 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" event={"ID":"a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3","Type":"ContainerDied","Data":"ee6d1d1540d7b2ccce21763a093b9a59b05293a0e974ad580e4872a07dab9b5c"} Nov 24 11:40:20 crc kubenswrapper[4678]: I1124 11:40:20.165638 4678 scope.go:117] "RemoveContainer" containerID="8132111ec530ab0cfc3c88d6845ed1b264ade70a007a85f7dfc5c84046f38aad" Nov 24 11:40:20 crc kubenswrapper[4678]: I1124 11:40:20.165892 4678 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-tltnp" Nov 24 11:40:20 crc kubenswrapper[4678]: I1124 11:40:20.209748 4678 scope.go:117] "RemoveContainer" containerID="89e1fc15624953c6271982051e9e822b38de30c6c963a2abc003b626e9f02ef7" Nov 24 11:40:20 crc kubenswrapper[4678]: I1124 11:40:20.260257 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-tltnp"] Nov 24 11:40:20 crc kubenswrapper[4678]: I1124 11:40:20.284972 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-tltnp"] Nov 24 11:40:21 crc kubenswrapper[4678]: I1124 11:40:21.181702 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerStarted","Data":"d76b0a2e4b95a70b50112bbeaf45b4946fcda9d416a53e9cc70f4e8651981102"} Nov 24 11:40:21 crc kubenswrapper[4678]: I1124 11:40:21.918010 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" path="/var/lib/kubelet/pods/a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3/volumes" Nov 24 11:40:22 crc kubenswrapper[4678]: I1124 11:40:22.205723 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerStarted","Data":"57d137219185175453c9359f39ae4ed11ba97cf8ca59d81b5e43f6a0b5bdb9da"} Nov 24 11:40:22 crc kubenswrapper[4678]: I1124 11:40:22.205901 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:40:22 crc kubenswrapper[4678]: I1124 11:40:22.247079 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.946086064 podStartE2EDuration="6.247062363s" podCreationTimestamp="2025-11-24 11:40:16 +0000 UTC" firstStartedPulling="2025-11-24 11:40:17.070603678 +0000 UTC m=+1428.001663317" 
lastFinishedPulling="2025-11-24 11:40:21.371579937 +0000 UTC m=+1432.302639616" observedRunningTime="2025-11-24 11:40:22.235828162 +0000 UTC m=+1433.166887801" watchObservedRunningTime="2025-11-24 11:40:22.247062363 +0000 UTC m=+1433.178122002" Nov 24 11:40:23 crc kubenswrapper[4678]: I1124 11:40:23.221538 4678 generic.go:334] "Generic (PLEG): container finished" podID="9d4c80df-952c-4b91-9957-5629417ef13a" containerID="9a9c24edc7320c63a99e052fc4a677b5f4235aa9df14f2e71abc1cd7c87f36b8" exitCode=0 Nov 24 11:40:23 crc kubenswrapper[4678]: I1124 11:40:23.222120 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn8sk" event={"ID":"9d4c80df-952c-4b91-9957-5629417ef13a","Type":"ContainerDied","Data":"9a9c24edc7320c63a99e052fc4a677b5f4235aa9df14f2e71abc1cd7c87f36b8"} Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.708402 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.908431 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-scripts\") pod \"9d4c80df-952c-4b91-9957-5629417ef13a\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.908509 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzwc4\" (UniqueName: \"kubernetes.io/projected/9d4c80df-952c-4b91-9957-5629417ef13a-kube-api-access-mzwc4\") pod \"9d4c80df-952c-4b91-9957-5629417ef13a\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.908553 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-config-data\") pod 
\"9d4c80df-952c-4b91-9957-5629417ef13a\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.908771 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-combined-ca-bundle\") pod \"9d4c80df-952c-4b91-9957-5629417ef13a\" (UID: \"9d4c80df-952c-4b91-9957-5629417ef13a\") " Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.921282 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-scripts" (OuterVolumeSpecName: "scripts") pod "9d4c80df-952c-4b91-9957-5629417ef13a" (UID: "9d4c80df-952c-4b91-9957-5629417ef13a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.923874 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4c80df-952c-4b91-9957-5629417ef13a-kube-api-access-mzwc4" (OuterVolumeSpecName: "kube-api-access-mzwc4") pod "9d4c80df-952c-4b91-9957-5629417ef13a" (UID: "9d4c80df-952c-4b91-9957-5629417ef13a"). InnerVolumeSpecName "kube-api-access-mzwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.939845 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-config-data" (OuterVolumeSpecName: "config-data") pod "9d4c80df-952c-4b91-9957-5629417ef13a" (UID: "9d4c80df-952c-4b91-9957-5629417ef13a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:24 crc kubenswrapper[4678]: I1124 11:40:24.942180 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d4c80df-952c-4b91-9957-5629417ef13a" (UID: "9d4c80df-952c-4b91-9957-5629417ef13a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.010462 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.010506 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.010518 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzwc4\" (UniqueName: \"kubernetes.io/projected/9d4c80df-952c-4b91-9957-5629417ef13a-kube-api-access-mzwc4\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.010541 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d4c80df-952c-4b91-9957-5629417ef13a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.244423 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn8sk" event={"ID":"9d4c80df-952c-4b91-9957-5629417ef13a","Type":"ContainerDied","Data":"6614abf0fa1cbfc3b58ebc2dc70beabb2de1fb23185d2f8d4b7b75ed02b5a772"} Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.244461 4678 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="6614abf0fa1cbfc3b58ebc2dc70beabb2de1fb23185d2f8d4b7b75ed02b5a772" Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.244468 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn8sk" Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.428457 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.428752 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" containerName="nova-api-log" containerID="cri-o://99d7583c232800315134d1afa1eb2f19b33ba8aaf7e24064d6bf7f604c2f2a90" gracePeriod=30 Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.429055 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" containerName="nova-api-api" containerID="cri-o://c6955fa73a09f11d0b4f513fb260643f17e2ae596ea4d4dca2cdec8131d0c879" gracePeriod=30 Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.446068 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.446278 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1e4b7173-e5ad-48ee-b578-4f67d6b0e832" containerName="nova-scheduler-scheduler" containerID="cri-o://7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1" gracePeriod=30 Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.485136 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.485363 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="722acbe1-a292-43be-88ea-7759fb793035" 
containerName="nova-metadata-log" containerID="cri-o://a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c" gracePeriod=30 Nov 24 11:40:25 crc kubenswrapper[4678]: I1124 11:40:25.485519 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-metadata" containerID="cri-o://12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935" gracePeriod=30 Nov 24 11:40:25 crc kubenswrapper[4678]: E1124 11:40:25.893963 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5e41e21_1a4c_4077_99a5_fae558577594.slice/crio-conmon-c6955fa73a09f11d0b4f513fb260643f17e2ae596ea4d4dca2cdec8131d0c879.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.328957 4678 generic.go:334] "Generic (PLEG): container finished" podID="722acbe1-a292-43be-88ea-7759fb793035" containerID="a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c" exitCode=143 Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.329235 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722acbe1-a292-43be-88ea-7759fb793035","Type":"ContainerDied","Data":"a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c"} Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.349833 4678 generic.go:334] "Generic (PLEG): container finished" podID="b5e41e21-1a4c-4077-99a5-fae558577594" containerID="c6955fa73a09f11d0b4f513fb260643f17e2ae596ea4d4dca2cdec8131d0c879" exitCode=0 Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.349870 4678 generic.go:334] "Generic (PLEG): container finished" podID="b5e41e21-1a4c-4077-99a5-fae558577594" containerID="99d7583c232800315134d1afa1eb2f19b33ba8aaf7e24064d6bf7f604c2f2a90" exitCode=143 Nov 24 
11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.349891 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5e41e21-1a4c-4077-99a5-fae558577594","Type":"ContainerDied","Data":"c6955fa73a09f11d0b4f513fb260643f17e2ae596ea4d4dca2cdec8131d0c879"} Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.349915 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5e41e21-1a4c-4077-99a5-fae558577594","Type":"ContainerDied","Data":"99d7583c232800315134d1afa1eb2f19b33ba8aaf7e24064d6bf7f604c2f2a90"} Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.519387 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.566075 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5e41e21-1a4c-4077-99a5-fae558577594-logs\") pod \"b5e41e21-1a4c-4077-99a5-fae558577594\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.566212 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-public-tls-certs\") pod \"b5e41e21-1a4c-4077-99a5-fae558577594\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.566375 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc4tv\" (UniqueName: \"kubernetes.io/projected/b5e41e21-1a4c-4077-99a5-fae558577594-kube-api-access-qc4tv\") pod \"b5e41e21-1a4c-4077-99a5-fae558577594\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.566423 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-combined-ca-bundle\") pod \"b5e41e21-1a4c-4077-99a5-fae558577594\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.566520 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-internal-tls-certs\") pod \"b5e41e21-1a4c-4077-99a5-fae558577594\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.566521 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e41e21-1a4c-4077-99a5-fae558577594-logs" (OuterVolumeSpecName: "logs") pod "b5e41e21-1a4c-4077-99a5-fae558577594" (UID: "b5e41e21-1a4c-4077-99a5-fae558577594"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.566602 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-config-data\") pod \"b5e41e21-1a4c-4077-99a5-fae558577594\" (UID: \"b5e41e21-1a4c-4077-99a5-fae558577594\") " Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.567499 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5e41e21-1a4c-4077-99a5-fae558577594-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.592297 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e41e21-1a4c-4077-99a5-fae558577594-kube-api-access-qc4tv" (OuterVolumeSpecName: "kube-api-access-qc4tv") pod "b5e41e21-1a4c-4077-99a5-fae558577594" (UID: "b5e41e21-1a4c-4077-99a5-fae558577594"). InnerVolumeSpecName "kube-api-access-qc4tv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.614150 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-config-data" (OuterVolumeSpecName: "config-data") pod "b5e41e21-1a4c-4077-99a5-fae558577594" (UID: "b5e41e21-1a4c-4077-99a5-fae558577594"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.623041 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5e41e21-1a4c-4077-99a5-fae558577594" (UID: "b5e41e21-1a4c-4077-99a5-fae558577594"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.646651 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b5e41e21-1a4c-4077-99a5-fae558577594" (UID: "b5e41e21-1a4c-4077-99a5-fae558577594"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.648778 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b5e41e21-1a4c-4077-99a5-fae558577594" (UID: "b5e41e21-1a4c-4077-99a5-fae558577594"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.670152 4678 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.670194 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc4tv\" (UniqueName: \"kubernetes.io/projected/b5e41e21-1a4c-4077-99a5-fae558577594-kube-api-access-qc4tv\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.670206 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.670216 4678 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:26 crc kubenswrapper[4678]: I1124 11:40:26.670228 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e41e21-1a4c-4077-99a5-fae558577594-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:26 crc kubenswrapper[4678]: E1124 11:40:26.998588 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:40:27 crc kubenswrapper[4678]: E1124 11:40:27.000583 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:40:27 crc kubenswrapper[4678]: E1124 11:40:27.001679 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 11:40:27 crc kubenswrapper[4678]: E1124 11:40:27.001713 4678 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1e4b7173-e5ad-48ee-b578-4f67d6b0e832" containerName="nova-scheduler-scheduler" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.366901 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5e41e21-1a4c-4077-99a5-fae558577594","Type":"ContainerDied","Data":"6cadee65947635b47c436ffcc173c218026d411abe0b0d2668f683fc82e5192d"} Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.367451 4678 scope.go:117] "RemoveContainer" containerID="c6955fa73a09f11d0b4f513fb260643f17e2ae596ea4d4dca2cdec8131d0c879" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.367641 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.441586 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.443360 4678 scope.go:117] "RemoveContainer" containerID="99d7583c232800315134d1afa1eb2f19b33ba8aaf7e24064d6bf7f604c2f2a90" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.464934 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.475487 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:27 crc kubenswrapper[4678]: E1124 11:40:27.476324 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" containerName="init" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476353 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" containerName="init" Nov 24 11:40:27 crc kubenswrapper[4678]: E1124 11:40:27.476371 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" containerName="dnsmasq-dns" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476378 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" containerName="dnsmasq-dns" Nov 24 11:40:27 crc kubenswrapper[4678]: E1124 11:40:27.476393 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" containerName="nova-api-api" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476400 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" containerName="nova-api-api" Nov 24 11:40:27 crc kubenswrapper[4678]: E1124 11:40:27.476416 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4c80df-952c-4b91-9957-5629417ef13a" 
containerName="nova-manage" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476424 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4c80df-952c-4b91-9957-5629417ef13a" containerName="nova-manage" Nov 24 11:40:27 crc kubenswrapper[4678]: E1124 11:40:27.476438 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" containerName="nova-api-log" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476445 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" containerName="nova-api-log" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476741 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6014b69-c2c2-4ebd-94d0-bbc0c5ecc0b3" containerName="dnsmasq-dns" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476754 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" containerName="nova-api-log" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476768 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" containerName="nova-api-api" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.476780 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4c80df-952c-4b91-9957-5629417ef13a" containerName="nova-manage" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.485701 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.487953 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.488210 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.489295 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.490288 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.589821 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19cdd516-8b52-4b72-936c-37c619cda4a6-logs\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.590138 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.590298 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjxg\" (UniqueName: \"kubernetes.io/projected/19cdd516-8b52-4b72-936c-37c619cda4a6-kube-api-access-fhjxg\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.590383 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-config-data\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.590453 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.590592 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.693028 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.693113 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19cdd516-8b52-4b72-936c-37c619cda4a6-logs\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.693182 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc 
kubenswrapper[4678]: I1124 11:40:27.693240 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjxg\" (UniqueName: \"kubernetes.io/projected/19cdd516-8b52-4b72-936c-37c619cda4a6-kube-api-access-fhjxg\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.693277 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-config-data\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.693305 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.694041 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19cdd516-8b52-4b72-936c-37c619cda4a6-logs\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.699149 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.699266 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.699447 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.700065 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19cdd516-8b52-4b72-936c-37c619cda4a6-config-data\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.713306 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjxg\" (UniqueName: \"kubernetes.io/projected/19cdd516-8b52-4b72-936c-37c619cda4a6-kube-api-access-fhjxg\") pod \"nova-api-0\" (UID: \"19cdd516-8b52-4b72-936c-37c619cda4a6\") " pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.805086 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 11:40:27 crc kubenswrapper[4678]: I1124 11:40:27.919063 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e41e21-1a4c-4077-99a5-fae558577594" path="/var/lib/kubelet/pods/b5e41e21-1a4c-4077-99a5-fae558577594/volumes" Nov 24 11:40:28 crc kubenswrapper[4678]: I1124 11:40:28.282599 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 11:40:28 crc kubenswrapper[4678]: W1124 11:40:28.285554 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19cdd516_8b52_4b72_936c_37c619cda4a6.slice/crio-0d83cb1f69b0365aadf4bc612074abcb8cb5f275fdf06d1fe0c0978b21057f19 WatchSource:0}: Error finding container 0d83cb1f69b0365aadf4bc612074abcb8cb5f275fdf06d1fe0c0978b21057f19: Status 404 returned error can't find the container with id 0d83cb1f69b0365aadf4bc612074abcb8cb5f275fdf06d1fe0c0978b21057f19 Nov 24 11:40:28 crc kubenswrapper[4678]: I1124 11:40:28.379824 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19cdd516-8b52-4b72-936c-37c619cda4a6","Type":"ContainerStarted","Data":"0d83cb1f69b0365aadf4bc612074abcb8cb5f275fdf06d1fe0c0978b21057f19"} Nov 24 11:40:28 crc kubenswrapper[4678]: I1124 11:40:28.638190 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.241:8775/\": read tcp 10.217.0.2:44630->10.217.0.241:8775: read: connection reset by peer" Nov 24 11:40:28 crc kubenswrapper[4678]: I1124 11:40:28.638281 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.241:8775/\": read tcp 
10.217.0.2:44628->10.217.0.241:8775: read: connection reset by peer" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.147662 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.245377 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-config-data\") pod \"722acbe1-a292-43be-88ea-7759fb793035\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.245472 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-nova-metadata-tls-certs\") pod \"722acbe1-a292-43be-88ea-7759fb793035\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.245626 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722acbe1-a292-43be-88ea-7759fb793035-logs\") pod \"722acbe1-a292-43be-88ea-7759fb793035\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.245848 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-combined-ca-bundle\") pod \"722acbe1-a292-43be-88ea-7759fb793035\" (UID: \"722acbe1-a292-43be-88ea-7759fb793035\") " Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.245930 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khvn8\" (UniqueName: \"kubernetes.io/projected/722acbe1-a292-43be-88ea-7759fb793035-kube-api-access-khvn8\") pod \"722acbe1-a292-43be-88ea-7759fb793035\" (UID: 
\"722acbe1-a292-43be-88ea-7759fb793035\") " Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.251714 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/722acbe1-a292-43be-88ea-7759fb793035-logs" (OuterVolumeSpecName: "logs") pod "722acbe1-a292-43be-88ea-7759fb793035" (UID: "722acbe1-a292-43be-88ea-7759fb793035"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.254076 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722acbe1-a292-43be-88ea-7759fb793035-kube-api-access-khvn8" (OuterVolumeSpecName: "kube-api-access-khvn8") pod "722acbe1-a292-43be-88ea-7759fb793035" (UID: "722acbe1-a292-43be-88ea-7759fb793035"). InnerVolumeSpecName "kube-api-access-khvn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.303123 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-config-data" (OuterVolumeSpecName: "config-data") pod "722acbe1-a292-43be-88ea-7759fb793035" (UID: "722acbe1-a292-43be-88ea-7759fb793035"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.306038 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "722acbe1-a292-43be-88ea-7759fb793035" (UID: "722acbe1-a292-43be-88ea-7759fb793035"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.344769 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "722acbe1-a292-43be-88ea-7759fb793035" (UID: "722acbe1-a292-43be-88ea-7759fb793035"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.348555 4678 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722acbe1-a292-43be-88ea-7759fb793035-logs\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.348584 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.348594 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khvn8\" (UniqueName: \"kubernetes.io/projected/722acbe1-a292-43be-88ea-7759fb793035-kube-api-access-khvn8\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.348603 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.348612 4678 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722acbe1-a292-43be-88ea-7759fb793035-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.371754 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.412258 4678 generic.go:334] "Generic (PLEG): container finished" podID="1e4b7173-e5ad-48ee-b578-4f67d6b0e832" containerID="7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1" exitCode=0 Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.412334 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e4b7173-e5ad-48ee-b578-4f67d6b0e832","Type":"ContainerDied","Data":"7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1"} Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.412363 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e4b7173-e5ad-48ee-b578-4f67d6b0e832","Type":"ContainerDied","Data":"63d002fe4533aa03dfd51fb09902049415459d234adae681e1334f5664cea3d7"} Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.412383 4678 scope.go:117] "RemoveContainer" containerID="7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.412485 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.420181 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19cdd516-8b52-4b72-936c-37c619cda4a6","Type":"ContainerStarted","Data":"1cc86addb171c4d18caa0d47733ab570df2a9bfd01055846155c4708f748c3f4"} Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.420360 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19cdd516-8b52-4b72-936c-37c619cda4a6","Type":"ContainerStarted","Data":"8ab5a1a5267159c81a30b2edfffe12e6b2da0c35ea49cf53d5b286e997ea97be"} Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.429516 4678 generic.go:334] "Generic (PLEG): container finished" podID="722acbe1-a292-43be-88ea-7759fb793035" containerID="12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935" exitCode=0 Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.429572 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722acbe1-a292-43be-88ea-7759fb793035","Type":"ContainerDied","Data":"12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935"} Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.429603 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722acbe1-a292-43be-88ea-7759fb793035","Type":"ContainerDied","Data":"7d2784b4220e3b8c10608184b0e2151849ad7457144c6d3b7cbb0e8a419cace8"} Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.429888 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.439536 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.43952084 podStartE2EDuration="2.43952084s" podCreationTimestamp="2025-11-24 11:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:40:29.435930843 +0000 UTC m=+1440.366990482" watchObservedRunningTime="2025-11-24 11:40:29.43952084 +0000 UTC m=+1440.370580479" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.449873 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-config-data\") pod \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.450248 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qvws\" (UniqueName: \"kubernetes.io/projected/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-kube-api-access-6qvws\") pod \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.450300 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-combined-ca-bundle\") pod \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\" (UID: \"1e4b7173-e5ad-48ee-b578-4f67d6b0e832\") " Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.455839 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-kube-api-access-6qvws" (OuterVolumeSpecName: "kube-api-access-6qvws") pod "1e4b7173-e5ad-48ee-b578-4f67d6b0e832" (UID: 
"1e4b7173-e5ad-48ee-b578-4f67d6b0e832"). InnerVolumeSpecName "kube-api-access-6qvws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.480918 4678 scope.go:117] "RemoveContainer" containerID="7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1" Nov 24 11:40:29 crc kubenswrapper[4678]: E1124 11:40:29.481436 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1\": container with ID starting with 7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1 not found: ID does not exist" containerID="7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.481480 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1"} err="failed to get container status \"7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1\": rpc error: code = NotFound desc = could not find container \"7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1\": container with ID starting with 7d667acb5cbb9eccf25d03dc036db657dcdae7e6f2b018bdf529e4f5ccadc2d1 not found: ID does not exist" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.481499 4678 scope.go:117] "RemoveContainer" containerID="12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.482108 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.495628 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.500798 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-config-data" (OuterVolumeSpecName: "config-data") pod "1e4b7173-e5ad-48ee-b578-4f67d6b0e832" (UID: "1e4b7173-e5ad-48ee-b578-4f67d6b0e832"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.507324 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e4b7173-e5ad-48ee-b578-4f67d6b0e832" (UID: "1e4b7173-e5ad-48ee-b578-4f67d6b0e832"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.511108 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:40:29 crc kubenswrapper[4678]: E1124 11:40:29.511813 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-log" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.511838 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-log" Nov 24 11:40:29 crc kubenswrapper[4678]: E1124 11:40:29.511875 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-metadata" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.511882 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-metadata" Nov 24 11:40:29 crc kubenswrapper[4678]: E1124 11:40:29.511912 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4b7173-e5ad-48ee-b578-4f67d6b0e832" containerName="nova-scheduler-scheduler" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.511918 4678 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1e4b7173-e5ad-48ee-b578-4f67d6b0e832" containerName="nova-scheduler-scheduler" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.512157 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-metadata" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.512181 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="722acbe1-a292-43be-88ea-7759fb793035" containerName="nova-metadata-log" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.512190 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4b7173-e5ad-48ee-b578-4f67d6b0e832" containerName="nova-scheduler-scheduler" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.513548 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.517369 4678 scope.go:117] "RemoveContainer" containerID="a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.519894 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.520318 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.520936 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.552532 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-config-data\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc 
kubenswrapper[4678]: I1124 11:40:29.552589 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.552649 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtpk\" (UniqueName: \"kubernetes.io/projected/34efe18e-641b-4f0c-a39b-94693f74d2bb-kube-api-access-sjtpk\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.552712 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.552792 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34efe18e-641b-4f0c-a39b-94693f74d2bb-logs\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.553617 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qvws\" (UniqueName: \"kubernetes.io/projected/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-kube-api-access-6qvws\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.553644 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.553656 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4b7173-e5ad-48ee-b578-4f67d6b0e832-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.564473 4678 scope.go:117] "RemoveContainer" containerID="12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935" Nov 24 11:40:29 crc kubenswrapper[4678]: E1124 11:40:29.565525 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935\": container with ID starting with 12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935 not found: ID does not exist" containerID="12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.565560 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935"} err="failed to get container status \"12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935\": rpc error: code = NotFound desc = could not find container \"12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935\": container with ID starting with 12fc6a9d37660edcf33bcb38bc03b8d9d4f67ad6e1eaa11c48bfea5ea0176935 not found: ID does not exist" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.565581 4678 scope.go:117] "RemoveContainer" containerID="a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c" Nov 24 11:40:29 crc kubenswrapper[4678]: E1124 11:40:29.566876 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c\": container with ID starting with a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c not found: ID does not exist" containerID="a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.566906 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c"} err="failed to get container status \"a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c\": rpc error: code = NotFound desc = could not find container \"a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c\": container with ID starting with a78ce56c9bd708c6bdcd654307e5537ea93beda341951abedaee7286bdaa1c2c not found: ID does not exist" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.656024 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-config-data\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.656071 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.656119 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjtpk\" (UniqueName: \"kubernetes.io/projected/34efe18e-641b-4f0c-a39b-94693f74d2bb-kube-api-access-sjtpk\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc 
kubenswrapper[4678]: I1124 11:40:29.656535 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.656965 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34efe18e-641b-4f0c-a39b-94693f74d2bb-logs\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.657455 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34efe18e-641b-4f0c-a39b-94693f74d2bb-logs\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.661423 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.661589 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-config-data\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.663038 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/34efe18e-641b-4f0c-a39b-94693f74d2bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.683697 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjtpk\" (UniqueName: \"kubernetes.io/projected/34efe18e-641b-4f0c-a39b-94693f74d2bb-kube-api-access-sjtpk\") pod \"nova-metadata-0\" (UID: \"34efe18e-641b-4f0c-a39b-94693f74d2bb\") " pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.802997 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.829110 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.835825 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.870375 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.892080 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:40:29 crc kubenswrapper[4678]: I1124 11:40:29.925770 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.005512 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-config-data\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.010498 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qssb8\" (UniqueName: \"kubernetes.io/projected/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-kube-api-access-qssb8\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.023254 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.028165 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e4b7173-e5ad-48ee-b578-4f67d6b0e832" path="/var/lib/kubelet/pods/1e4b7173-e5ad-48ee-b578-4f67d6b0e832/volumes" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.029278 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722acbe1-a292-43be-88ea-7759fb793035" path="/var/lib/kubelet/pods/722acbe1-a292-43be-88ea-7759fb793035/volumes" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.030368 4678 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.135166 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.135345 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-config-data\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.135436 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qssb8\" (UniqueName: \"kubernetes.io/projected/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-kube-api-access-qssb8\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.141484 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-config-data\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.143380 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.154703 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qssb8\" (UniqueName: \"kubernetes.io/projected/2cc52ccc-2152-40c4-a3ac-3d029a1f3e60-kube-api-access-qssb8\") pod \"nova-scheduler-0\" (UID: \"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60\") " pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.297325 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.297447 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.322184 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.478163 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 11:40:30 crc kubenswrapper[4678]: W1124 11:40:30.491971 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34efe18e_641b_4f0c_a39b_94693f74d2bb.slice/crio-ffa6a6c03726184e1170069863533b3f7439b4b6b95062e994a28dbed1357b2e WatchSource:0}: Error finding container ffa6a6c03726184e1170069863533b3f7439b4b6b95062e994a28dbed1357b2e: Status 404 returned error can't find the container with id ffa6a6c03726184e1170069863533b3f7439b4b6b95062e994a28dbed1357b2e Nov 24 11:40:30 crc kubenswrapper[4678]: I1124 11:40:30.867715 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 11:40:30 crc kubenswrapper[4678]: W1124 11:40:30.904201 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc52ccc_2152_40c4_a3ac_3d029a1f3e60.slice/crio-8e85df95e285e306a2e7ac3b1073d0d7a297d24b00d0406400de97f1bcbf3061 WatchSource:0}: Error finding container 8e85df95e285e306a2e7ac3b1073d0d7a297d24b00d0406400de97f1bcbf3061: Status 404 returned error can't find the container with id 8e85df95e285e306a2e7ac3b1073d0d7a297d24b00d0406400de97f1bcbf3061 Nov 24 11:40:31 crc kubenswrapper[4678]: I1124 11:40:31.475018 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60","Type":"ContainerStarted","Data":"94fabaee543e54128693516b37f6ccc10b2091bbbce6721523078b271a562938"} Nov 24 11:40:31 crc kubenswrapper[4678]: I1124 11:40:31.475066 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"2cc52ccc-2152-40c4-a3ac-3d029a1f3e60","Type":"ContainerStarted","Data":"8e85df95e285e306a2e7ac3b1073d0d7a297d24b00d0406400de97f1bcbf3061"} Nov 24 11:40:31 crc kubenswrapper[4678]: I1124 11:40:31.477221 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34efe18e-641b-4f0c-a39b-94693f74d2bb","Type":"ContainerStarted","Data":"aaa63fbc12dc685e552af11548203e4917ce29802d818f69a0a6a2c546944bbf"} Nov 24 11:40:31 crc kubenswrapper[4678]: I1124 11:40:31.477254 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34efe18e-641b-4f0c-a39b-94693f74d2bb","Type":"ContainerStarted","Data":"9e1c59078af515041d2718e9dd12b1718721eb3e74089f51313bcd17db5fea9e"} Nov 24 11:40:31 crc kubenswrapper[4678]: I1124 11:40:31.477268 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34efe18e-641b-4f0c-a39b-94693f74d2bb","Type":"ContainerStarted","Data":"ffa6a6c03726184e1170069863533b3f7439b4b6b95062e994a28dbed1357b2e"} Nov 24 11:40:31 crc kubenswrapper[4678]: I1124 11:40:31.505659 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.505637268 podStartE2EDuration="2.505637268s" podCreationTimestamp="2025-11-24 11:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:40:31.4929708 +0000 UTC m=+1442.424030529" watchObservedRunningTime="2025-11-24 11:40:31.505637268 +0000 UTC m=+1442.436696907" Nov 24 11:40:31 crc kubenswrapper[4678]: I1124 11:40:31.530428 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5304041 podStartE2EDuration="2.5304041s" podCreationTimestamp="2025-11-24 11:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-24 11:40:31.526833335 +0000 UTC m=+1442.457892984" watchObservedRunningTime="2025-11-24 11:40:31.5304041 +0000 UTC m=+1442.461463739" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.563393 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h2mkf"] Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.566449 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.590099 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2mkf"] Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.697651 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-catalog-content\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.697703 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vpz\" (UniqueName: \"kubernetes.io/projected/204431f6-4a3b-4034-9777-ecefd6b17457-kube-api-access-88vpz\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.698196 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-utilities\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 
11:40:32.800120 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-catalog-content\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.800163 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vpz\" (UniqueName: \"kubernetes.io/projected/204431f6-4a3b-4034-9777-ecefd6b17457-kube-api-access-88vpz\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.800265 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-utilities\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.800734 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-utilities\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.800903 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-catalog-content\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.822349 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-88vpz\" (UniqueName: \"kubernetes.io/projected/204431f6-4a3b-4034-9777-ecefd6b17457-kube-api-access-88vpz\") pod \"redhat-marketplace-h2mkf\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:32 crc kubenswrapper[4678]: I1124 11:40:32.894309 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:33 crc kubenswrapper[4678]: I1124 11:40:33.448770 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2mkf"] Nov 24 11:40:33 crc kubenswrapper[4678]: W1124 11:40:33.474642 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod204431f6_4a3b_4034_9777_ecefd6b17457.slice/crio-75f143f742c5dc39962593478521777939378fa4719253af52c0a41c40f1417f WatchSource:0}: Error finding container 75f143f742c5dc39962593478521777939378fa4719253af52c0a41c40f1417f: Status 404 returned error can't find the container with id 75f143f742c5dc39962593478521777939378fa4719253af52c0a41c40f1417f Nov 24 11:40:33 crc kubenswrapper[4678]: I1124 11:40:33.504911 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2mkf" event={"ID":"204431f6-4a3b-4034-9777-ecefd6b17457","Type":"ContainerStarted","Data":"75f143f742c5dc39962593478521777939378fa4719253af52c0a41c40f1417f"} Nov 24 11:40:34 crc kubenswrapper[4678]: I1124 11:40:34.519921 4678 generic.go:334] "Generic (PLEG): container finished" podID="204431f6-4a3b-4034-9777-ecefd6b17457" containerID="88fb9c62045b4dc362edb6f6dd927b852012457878eace6daad33a933e608932" exitCode=0 Nov 24 11:40:34 crc kubenswrapper[4678]: I1124 11:40:34.520219 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2mkf" 
event={"ID":"204431f6-4a3b-4034-9777-ecefd6b17457","Type":"ContainerDied","Data":"88fb9c62045b4dc362edb6f6dd927b852012457878eace6daad33a933e608932"} Nov 24 11:40:34 crc kubenswrapper[4678]: I1124 11:40:34.835987 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:40:34 crc kubenswrapper[4678]: I1124 11:40:34.836349 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 11:40:35 crc kubenswrapper[4678]: I1124 11:40:35.322774 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 11:40:35 crc kubenswrapper[4678]: I1124 11:40:35.535275 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2mkf" event={"ID":"204431f6-4a3b-4034-9777-ecefd6b17457","Type":"ContainerStarted","Data":"449aac29913811d007ae9e033dd682e63a3fa73494072d44ec10a60c67cafc59"} Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.546949 4678 generic.go:334] "Generic (PLEG): container finished" podID="204431f6-4a3b-4034-9777-ecefd6b17457" containerID="449aac29913811d007ae9e033dd682e63a3fa73494072d44ec10a60c67cafc59" exitCode=0 Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.547121 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2mkf" event={"ID":"204431f6-4a3b-4034-9777-ecefd6b17457","Type":"ContainerDied","Data":"449aac29913811d007ae9e033dd682e63a3fa73494072d44ec10a60c67cafc59"} Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.550597 4678 generic.go:334] "Generic (PLEG): container finished" podID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerID="96c531c4a57a2de56b3b6fa821d3cc8e221a68f6ff85ec020fa9f8c7fb238f5a" exitCode=137 Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.550629 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerDied","Data":"96c531c4a57a2de56b3b6fa821d3cc8e221a68f6ff85ec020fa9f8c7fb238f5a"} Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.550651 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db4ec7ad-4c52-4fe5-b298-29a526184c2a","Type":"ContainerDied","Data":"267b02f03caba44314b7dc857333595aa7eb364a47337a53d1edde1f092e9a3d"} Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.550661 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267b02f03caba44314b7dc857333595aa7eb364a47337a53d1edde1f092e9a3d" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.612002 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.693925 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vzhk\" (UniqueName: \"kubernetes.io/projected/db4ec7ad-4c52-4fe5-b298-29a526184c2a-kube-api-access-8vzhk\") pod \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.694517 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-scripts\") pod \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.694803 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-combined-ca-bundle\") pod \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.694990 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-config-data\") pod \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\" (UID: \"db4ec7ad-4c52-4fe5-b298-29a526184c2a\") " Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.705027 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4ec7ad-4c52-4fe5-b298-29a526184c2a-kube-api-access-8vzhk" (OuterVolumeSpecName: "kube-api-access-8vzhk") pod "db4ec7ad-4c52-4fe5-b298-29a526184c2a" (UID: "db4ec7ad-4c52-4fe5-b298-29a526184c2a"). InnerVolumeSpecName "kube-api-access-8vzhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.722840 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-scripts" (OuterVolumeSpecName: "scripts") pod "db4ec7ad-4c52-4fe5-b298-29a526184c2a" (UID: "db4ec7ad-4c52-4fe5-b298-29a526184c2a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.799563 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.799601 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vzhk\" (UniqueName: \"kubernetes.io/projected/db4ec7ad-4c52-4fe5-b298-29a526184c2a-kube-api-access-8vzhk\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.852392 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-config-data" (OuterVolumeSpecName: "config-data") pod "db4ec7ad-4c52-4fe5-b298-29a526184c2a" (UID: "db4ec7ad-4c52-4fe5-b298-29a526184c2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.855809 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db4ec7ad-4c52-4fe5-b298-29a526184c2a" (UID: "db4ec7ad-4c52-4fe5-b298-29a526184c2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.901647 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:36 crc kubenswrapper[4678]: I1124 11:40:36.901688 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ec7ad-4c52-4fe5-b298-29a526184c2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.563400 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.566219 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2mkf" event={"ID":"204431f6-4a3b-4034-9777-ecefd6b17457","Type":"ContainerStarted","Data":"d7fa12841709236f29a8bfbcce4110b9c20e73b8c19d8e49f010f101dbc02386"} Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.596321 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h2mkf" podStartSLOduration=3.1285950160000002 podStartE2EDuration="5.596298519s" podCreationTimestamp="2025-11-24 11:40:32 +0000 UTC" firstStartedPulling="2025-11-24 11:40:34.522268186 +0000 UTC m=+1445.453327825" lastFinishedPulling="2025-11-24 11:40:36.989971689 +0000 UTC m=+1447.921031328" observedRunningTime="2025-11-24 11:40:37.586028244 +0000 UTC m=+1448.517087893" watchObservedRunningTime="2025-11-24 11:40:37.596298519 +0000 UTC m=+1448.527358168" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.616149 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.626920 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/aodh-0"] Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.684887 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 11:40:37 crc kubenswrapper[4678]: E1124 11:40:37.686134 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-evaluator" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.686157 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-evaluator" Nov 24 11:40:37 crc kubenswrapper[4678]: E1124 11:40:37.686221 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-api" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.686228 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-api" Nov 24 11:40:37 crc kubenswrapper[4678]: E1124 11:40:37.686259 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-notifier" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.686265 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-notifier" Nov 24 11:40:37 crc kubenswrapper[4678]: E1124 11:40:37.686299 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-listener" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.686306 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-listener" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.686722 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-listener" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.686747 4678 
memory_manager.go:354] "RemoveStaleState removing state" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-api" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.686759 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-notifier" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.686785 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" containerName="aodh-evaluator" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.691775 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.701643 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.702244 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bwbmq" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.702425 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.702570 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.704490 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.745311 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn6x7\" (UniqueName: \"kubernetes.io/projected/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-kube-api-access-qn6x7\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.745397 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-scripts\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.745445 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-config-data\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.745467 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-internal-tls-certs\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.745487 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.745574 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-public-tls-certs\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.780803 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.806391 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.806839 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.847626 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-public-tls-certs\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.847768 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn6x7\" (UniqueName: \"kubernetes.io/projected/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-kube-api-access-qn6x7\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.847821 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-scripts\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.847858 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-config-data\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.847884 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-internal-tls-certs\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.847902 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.853756 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-scripts\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.854413 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-public-tls-certs\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.854528 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-internal-tls-certs\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.858261 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-config-data\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.859689 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 
11:40:37.916413 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4ec7ad-4c52-4fe5-b298-29a526184c2a" path="/var/lib/kubelet/pods/db4ec7ad-4c52-4fe5-b298-29a526184c2a/volumes" Nov 24 11:40:37 crc kubenswrapper[4678]: I1124 11:40:37.942934 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn6x7\" (UniqueName: \"kubernetes.io/projected/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-kube-api-access-qn6x7\") pod \"aodh-0\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " pod="openstack/aodh-0" Nov 24 11:40:38 crc kubenswrapper[4678]: I1124 11:40:38.045639 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 11:40:38 crc kubenswrapper[4678]: W1124 11:40:38.560024 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddede9b01_c855_46bb_b17c_3ebc79ca3ff5.slice/crio-0b62396c00505e6e48d1618b4932ce7697ea463bf56f4d2fb88df8c0e9064c41 WatchSource:0}: Error finding container 0b62396c00505e6e48d1618b4932ce7697ea463bf56f4d2fb88df8c0e9064c41: Status 404 returned error can't find the container with id 0b62396c00505e6e48d1618b4932ce7697ea463bf56f4d2fb88df8c0e9064c41 Nov 24 11:40:38 crc kubenswrapper[4678]: I1124 11:40:38.574300 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerStarted","Data":"0b62396c00505e6e48d1618b4932ce7697ea463bf56f4d2fb88df8c0e9064c41"} Nov 24 11:40:38 crc kubenswrapper[4678]: I1124 11:40:38.575516 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 11:40:38 crc kubenswrapper[4678]: I1124 11:40:38.888879 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="19cdd516-8b52-4b72-936c-37c619cda4a6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.253:8774/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:40:38 crc kubenswrapper[4678]: I1124 11:40:38.888911 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="19cdd516-8b52-4b72-936c-37c619cda4a6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.253:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:40:38 crc kubenswrapper[4678]: I1124 11:40:38.948053 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t7wls"] Nov 24 11:40:38 crc kubenswrapper[4678]: I1124 11:40:38.950421 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:38 crc kubenswrapper[4678]: I1124 11:40:38.994036 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7wls"] Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.073272 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xvk\" (UniqueName: \"kubernetes.io/projected/a785375f-ace8-49dd-be97-c175855a2ecd-kube-api-access-m7xvk\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.073321 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-utilities\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.073597 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-catalog-content\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.175936 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xvk\" (UniqueName: \"kubernetes.io/projected/a785375f-ace8-49dd-be97-c175855a2ecd-kube-api-access-m7xvk\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.175981 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-utilities\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.176074 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-catalog-content\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.176593 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-catalog-content\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.176594 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-utilities\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.198400 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xvk\" (UniqueName: \"kubernetes.io/projected/a785375f-ace8-49dd-be97-c175855a2ecd-kube-api-access-m7xvk\") pod \"redhat-operators-t7wls\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.274913 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.586918 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerStarted","Data":"8794fb3bae779b82e15f66fe9acb61f8e75ef61f136a9c64bfd670d9407e521c"} Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.780487 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7wls"] Nov 24 11:40:39 crc kubenswrapper[4678]: W1124 11:40:39.784310 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda785375f_ace8_49dd_be97_c175855a2ecd.slice/crio-4e42fc6bcdb3cecef49673ca8d1f93b67742f2e944adec8f9e87250e17f774f7 WatchSource:0}: Error finding container 4e42fc6bcdb3cecef49673ca8d1f93b67742f2e944adec8f9e87250e17f774f7: Status 404 returned error can't find the container with id 4e42fc6bcdb3cecef49673ca8d1f93b67742f2e944adec8f9e87250e17f774f7 Nov 24 11:40:39 crc kubenswrapper[4678]: I1124 11:40:39.838042 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:40:39 crc 
kubenswrapper[4678]: I1124 11:40:39.838115 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.326266 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.364436 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.601211 4678 generic.go:334] "Generic (PLEG): container finished" podID="a785375f-ace8-49dd-be97-c175855a2ecd" containerID="510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874" exitCode=0 Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.601280 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7wls" event={"ID":"a785375f-ace8-49dd-be97-c175855a2ecd","Type":"ContainerDied","Data":"510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874"} Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.601309 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7wls" event={"ID":"a785375f-ace8-49dd-be97-c175855a2ecd","Type":"ContainerStarted","Data":"4e42fc6bcdb3cecef49673ca8d1f93b67742f2e944adec8f9e87250e17f774f7"} Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.617112 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerStarted","Data":"ecdacfb168c696319b3f83b8abf5157ddc7034f2fa809f7b60d0b58f8a39fbec"} Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.661174 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.853768 4678 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="34efe18e-641b-4f0c-a39b-94693f74d2bb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:40:40 crc kubenswrapper[4678]: I1124 11:40:40.853843 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34efe18e-641b-4f0c-a39b-94693f74d2bb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 11:40:41 crc kubenswrapper[4678]: I1124 11:40:41.647926 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7wls" event={"ID":"a785375f-ace8-49dd-be97-c175855a2ecd","Type":"ContainerStarted","Data":"2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3"} Nov 24 11:40:41 crc kubenswrapper[4678]: I1124 11:40:41.653516 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerStarted","Data":"38d25c4465104f0c39efc2438beef8a67615df4719e4b90ee704f716cbb70f74"} Nov 24 11:40:42 crc kubenswrapper[4678]: I1124 11:40:42.690643 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerStarted","Data":"8c1f058f23d20600e023cc13524dfec570e01ffae5b8cdcf98c054b40705eace"} Nov 24 11:40:42 crc kubenswrapper[4678]: I1124 11:40:42.895059 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:42 crc kubenswrapper[4678]: I1124 11:40:42.897270 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:42 crc kubenswrapper[4678]: I1124 11:40:42.954392 4678 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:42 crc kubenswrapper[4678]: I1124 11:40:42.980925 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.905591625 podStartE2EDuration="5.980901363s" podCreationTimestamp="2025-11-24 11:40:37 +0000 UTC" firstStartedPulling="2025-11-24 11:40:38.563136226 +0000 UTC m=+1449.494195865" lastFinishedPulling="2025-11-24 11:40:41.638445964 +0000 UTC m=+1452.569505603" observedRunningTime="2025-11-24 11:40:42.718617021 +0000 UTC m=+1453.649676660" watchObservedRunningTime="2025-11-24 11:40:42.980901363 +0000 UTC m=+1453.911961002" Nov 24 11:40:43 crc kubenswrapper[4678]: I1124 11:40:43.752355 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:44 crc kubenswrapper[4678]: I1124 11:40:44.122027 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2mkf"] Nov 24 11:40:45 crc kubenswrapper[4678]: I1124 11:40:45.725672 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h2mkf" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" containerName="registry-server" containerID="cri-o://d7fa12841709236f29a8bfbcce4110b9c20e73b8c19d8e49f010f101dbc02386" gracePeriod=2 Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.539318 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.738155 4678 generic.go:334] "Generic (PLEG): container finished" podID="a785375f-ace8-49dd-be97-c175855a2ecd" containerID="2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3" exitCode=0 Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.738219 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-t7wls" event={"ID":"a785375f-ace8-49dd-be97-c175855a2ecd","Type":"ContainerDied","Data":"2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3"} Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.741173 4678 generic.go:334] "Generic (PLEG): container finished" podID="204431f6-4a3b-4034-9777-ecefd6b17457" containerID="d7fa12841709236f29a8bfbcce4110b9c20e73b8c19d8e49f010f101dbc02386" exitCode=0 Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.741199 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2mkf" event={"ID":"204431f6-4a3b-4034-9777-ecefd6b17457","Type":"ContainerDied","Data":"d7fa12841709236f29a8bfbcce4110b9c20e73b8c19d8e49f010f101dbc02386"} Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.741214 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2mkf" event={"ID":"204431f6-4a3b-4034-9777-ecefd6b17457","Type":"ContainerDied","Data":"75f143f742c5dc39962593478521777939378fa4719253af52c0a41c40f1417f"} Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.741224 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f143f742c5dc39962593478521777939378fa4719253af52c0a41c40f1417f" Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.778206 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.788079 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88vpz\" (UniqueName: \"kubernetes.io/projected/204431f6-4a3b-4034-9777-ecefd6b17457-kube-api-access-88vpz\") pod \"204431f6-4a3b-4034-9777-ecefd6b17457\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.788263 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-catalog-content\") pod \"204431f6-4a3b-4034-9777-ecefd6b17457\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.788296 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-utilities\") pod \"204431f6-4a3b-4034-9777-ecefd6b17457\" (UID: \"204431f6-4a3b-4034-9777-ecefd6b17457\") " Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.790147 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-utilities" (OuterVolumeSpecName: "utilities") pod "204431f6-4a3b-4034-9777-ecefd6b17457" (UID: "204431f6-4a3b-4034-9777-ecefd6b17457"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.812460 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204431f6-4a3b-4034-9777-ecefd6b17457-kube-api-access-88vpz" (OuterVolumeSpecName: "kube-api-access-88vpz") pod "204431f6-4a3b-4034-9777-ecefd6b17457" (UID: "204431f6-4a3b-4034-9777-ecefd6b17457"). InnerVolumeSpecName "kube-api-access-88vpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.821520 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "204431f6-4a3b-4034-9777-ecefd6b17457" (UID: "204431f6-4a3b-4034-9777-ecefd6b17457"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.891411 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88vpz\" (UniqueName: \"kubernetes.io/projected/204431f6-4a3b-4034-9777-ecefd6b17457-kube-api-access-88vpz\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.891440 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:46 crc kubenswrapper[4678]: I1124 11:40:46.891450 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204431f6-4a3b-4034-9777-ecefd6b17457-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.754621 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7wls" event={"ID":"a785375f-ace8-49dd-be97-c175855a2ecd","Type":"ContainerStarted","Data":"af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede"} Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.754661 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2mkf" Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.790356 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t7wls" podStartSLOduration=3.152805769 podStartE2EDuration="9.790337581s" podCreationTimestamp="2025-11-24 11:40:38 +0000 UTC" firstStartedPulling="2025-11-24 11:40:40.60555769 +0000 UTC m=+1451.536617329" lastFinishedPulling="2025-11-24 11:40:47.243089502 +0000 UTC m=+1458.174149141" observedRunningTime="2025-11-24 11:40:47.781795903 +0000 UTC m=+1458.712855542" watchObservedRunningTime="2025-11-24 11:40:47.790337581 +0000 UTC m=+1458.721397220" Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.809605 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2mkf"] Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.822766 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2mkf"] Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.825412 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.825859 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.837116 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.844313 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 11:40:47 crc kubenswrapper[4678]: I1124 11:40:47.911575 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" path="/var/lib/kubelet/pods/204431f6-4a3b-4034-9777-ecefd6b17457/volumes" Nov 24 11:40:48 crc kubenswrapper[4678]: 
I1124 11:40:48.766168 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 11:40:48 crc kubenswrapper[4678]: I1124 11:40:48.775760 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 11:40:49 crc kubenswrapper[4678]: I1124 11:40:49.275725 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:49 crc kubenswrapper[4678]: I1124 11:40:49.276163 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:40:49 crc kubenswrapper[4678]: I1124 11:40:49.917224 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:40:49 crc kubenswrapper[4678]: I1124 11:40:49.917349 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 11:40:49 crc kubenswrapper[4678]: I1124 11:40:49.937013 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:40:49 crc kubenswrapper[4678]: I1124 11:40:49.962390 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.334451 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t7wls" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" probeResult="failure" output=< Nov 24 11:40:50 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:40:50 crc kubenswrapper[4678]: > Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.531811 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4k52z"] Nov 24 11:40:50 crc kubenswrapper[4678]: E1124 11:40:50.532295 4678 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" containerName="extract-utilities" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.532312 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" containerName="extract-utilities" Nov 24 11:40:50 crc kubenswrapper[4678]: E1124 11:40:50.532321 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" containerName="extract-content" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.532328 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" containerName="extract-content" Nov 24 11:40:50 crc kubenswrapper[4678]: E1124 11:40:50.532344 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" containerName="registry-server" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.532351 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" containerName="registry-server" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.532562 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="204431f6-4a3b-4034-9777-ecefd6b17457" containerName="registry-server" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.534136 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.565849 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k52z"] Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.614035 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-utilities\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.614231 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkvw\" (UniqueName: \"kubernetes.io/projected/92e69f8c-3e27-40e9-9745-58c570b67749-kube-api-access-zwkvw\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.614304 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-catalog-content\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.717736 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkvw\" (UniqueName: \"kubernetes.io/projected/92e69f8c-3e27-40e9-9745-58c570b67749-kube-api-access-zwkvw\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.717801 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-catalog-content\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.718018 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-utilities\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.718767 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-utilities\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.718932 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-catalog-content\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.744707 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkvw\" (UniqueName: \"kubernetes.io/projected/92e69f8c-3e27-40e9-9745-58c570b67749-kube-api-access-zwkvw\") pod \"community-operators-4k52z\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:50 crc kubenswrapper[4678]: I1124 11:40:50.872213 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.440715 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k52z"] Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.612387 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.612658 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ddc6efef-042b-489a-a545-669ec3783e86" containerName="kube-state-metrics" containerID="cri-o://2e96bdbc0b9bc6563a6ab853bd2cad52c358a4f10d76fba029a0efa39c86cced" gracePeriod=30 Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.826546 4678 generic.go:334] "Generic (PLEG): container finished" podID="92e69f8c-3e27-40e9-9745-58c570b67749" containerID="ff5fbea66b9e5fa8e01dd81e1b5d5161a5b71c5baba15954fd1a8b9c3dec200b" exitCode=0 Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.826719 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k52z" event={"ID":"92e69f8c-3e27-40e9-9745-58c570b67749","Type":"ContainerDied","Data":"ff5fbea66b9e5fa8e01dd81e1b5d5161a5b71c5baba15954fd1a8b9c3dec200b"} Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.826938 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k52z" event={"ID":"92e69f8c-3e27-40e9-9745-58c570b67749","Type":"ContainerStarted","Data":"e34fda1697be371f7557c953702d247891ac088fdfb7d1a30365fe829d29829a"} Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.833518 4678 generic.go:334] "Generic (PLEG): container finished" podID="ddc6efef-042b-489a-a545-669ec3783e86" containerID="2e96bdbc0b9bc6563a6ab853bd2cad52c358a4f10d76fba029a0efa39c86cced" exitCode=2 Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.834255 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddc6efef-042b-489a-a545-669ec3783e86","Type":"ContainerDied","Data":"2e96bdbc0b9bc6563a6ab853bd2cad52c358a4f10d76fba029a0efa39c86cced"} Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.861039 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:40:51 crc kubenswrapper[4678]: I1124 11:40:51.861257 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="70557cb4-7672-4047-a601-1cf7723d8c82" containerName="mysqld-exporter" containerID="cri-o://ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef" gracePeriod=30 Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.285655 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.415106 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmm6b\" (UniqueName: \"kubernetes.io/projected/ddc6efef-042b-489a-a545-669ec3783e86-kube-api-access-lmm6b\") pod \"ddc6efef-042b-489a-a545-669ec3783e86\" (UID: \"ddc6efef-042b-489a-a545-669ec3783e86\") " Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.429769 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc6efef-042b-489a-a545-669ec3783e86-kube-api-access-lmm6b" (OuterVolumeSpecName: "kube-api-access-lmm6b") pod "ddc6efef-042b-489a-a545-669ec3783e86" (UID: "ddc6efef-042b-489a-a545-669ec3783e86"). InnerVolumeSpecName "kube-api-access-lmm6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.510176 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.521331 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmm6b\" (UniqueName: \"kubernetes.io/projected/ddc6efef-042b-489a-a545-669ec3783e86-kube-api-access-lmm6b\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.623729 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsd7l\" (UniqueName: \"kubernetes.io/projected/70557cb4-7672-4047-a601-1cf7723d8c82-kube-api-access-tsd7l\") pod \"70557cb4-7672-4047-a601-1cf7723d8c82\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.623918 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-config-data\") pod \"70557cb4-7672-4047-a601-1cf7723d8c82\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.624097 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-combined-ca-bundle\") pod \"70557cb4-7672-4047-a601-1cf7723d8c82\" (UID: \"70557cb4-7672-4047-a601-1cf7723d8c82\") " Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.630707 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70557cb4-7672-4047-a601-1cf7723d8c82-kube-api-access-tsd7l" (OuterVolumeSpecName: "kube-api-access-tsd7l") pod "70557cb4-7672-4047-a601-1cf7723d8c82" (UID: "70557cb4-7672-4047-a601-1cf7723d8c82"). InnerVolumeSpecName "kube-api-access-tsd7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.736201 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsd7l\" (UniqueName: \"kubernetes.io/projected/70557cb4-7672-4047-a601-1cf7723d8c82-kube-api-access-tsd7l\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.746839 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-config-data" (OuterVolumeSpecName: "config-data") pod "70557cb4-7672-4047-a601-1cf7723d8c82" (UID: "70557cb4-7672-4047-a601-1cf7723d8c82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.774917 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70557cb4-7672-4047-a601-1cf7723d8c82" (UID: "70557cb4-7672-4047-a601-1cf7723d8c82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.852413 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.852658 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70557cb4-7672-4047-a601-1cf7723d8c82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.898538 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddc6efef-042b-489a-a545-669ec3783e86","Type":"ContainerDied","Data":"1e71d5d5aff9c10975e3dc5807721568c40b9c2f3181d551a3397954fb734bb4"} Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.898867 4678 scope.go:117] "RemoveContainer" containerID="2e96bdbc0b9bc6563a6ab853bd2cad52c358a4f10d76fba029a0efa39c86cced" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.899252 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.950308 4678 generic.go:334] "Generic (PLEG): container finished" podID="70557cb4-7672-4047-a601-1cf7723d8c82" containerID="ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef" exitCode=2 Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.950364 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"70557cb4-7672-4047-a601-1cf7723d8c82","Type":"ContainerDied","Data":"ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef"} Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.950394 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"70557cb4-7672-4047-a601-1cf7723d8c82","Type":"ContainerDied","Data":"418204d2e88cefb56ad0bb50e697f17e22c671e977beee302179c3faf56f5deb"} Nov 24 11:40:52 crc kubenswrapper[4678]: I1124 11:40:52.950470 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.003841 4678 scope.go:117] "RemoveContainer" containerID="ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.042817 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.053022 4678 scope.go:117] "RemoveContainer" containerID="ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef" Nov 24 11:40:53 crc kubenswrapper[4678]: E1124 11:40:53.060039 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef\": container with ID starting with ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef not found: ID does not exist" containerID="ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.060106 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef"} err="failed to get container status \"ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef\": rpc error: code = NotFound desc = could not find container \"ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef\": container with ID starting with ea86e63fec205eea5944b439e7e5fb90b54451d0afbf34cbac9b9b2349efadef not found: ID does not exist" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.086793 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.101756 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:40:53 crc kubenswrapper[4678]: E1124 
11:40:53.102364 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70557cb4-7672-4047-a601-1cf7723d8c82" containerName="mysqld-exporter" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.102382 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="70557cb4-7672-4047-a601-1cf7723d8c82" containerName="mysqld-exporter" Nov 24 11:40:53 crc kubenswrapper[4678]: E1124 11:40:53.102416 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc6efef-042b-489a-a545-669ec3783e86" containerName="kube-state-metrics" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.102425 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc6efef-042b-489a-a545-669ec3783e86" containerName="kube-state-metrics" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.102637 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc6efef-042b-489a-a545-669ec3783e86" containerName="kube-state-metrics" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.102671 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="70557cb4-7672-4047-a601-1cf7723d8c82" containerName="mysqld-exporter" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.104370 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.115343 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.116150 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.126217 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.138443 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.149913 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.175708 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.177473 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.179231 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.179298 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.190054 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.285041 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzc8b\" (UniqueName: \"kubernetes.io/projected/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-kube-api-access-hzc8b\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.285098 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.285161 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-config-data\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.285187 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.285419 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls5pf\" (UniqueName: \"kubernetes.io/projected/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-api-access-ls5pf\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.285550 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.285660 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.285769 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.387466 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ls5pf\" (UniqueName: \"kubernetes.io/projected/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-api-access-ls5pf\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.387534 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.387567 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.387597 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.387716 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzc8b\" (UniqueName: \"kubernetes.io/projected/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-kube-api-access-hzc8b\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.387746 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.387781 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-config-data\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.387806 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.393055 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.402593 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-config-data\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.404413 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: 
\"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.406082 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.407874 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.411523 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls5pf\" (UniqueName: \"kubernetes.io/projected/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-kube-api-access-ls5pf\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.412265 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0031b4-15dc-4530-89ae-ffec2f45e9f7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e0031b4-15dc-4530-89ae-ffec2f45e9f7\") " pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.413405 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzc8b\" (UniqueName: \"kubernetes.io/projected/a60ff952-7be9-480a-be2b-ffbe9bddd9ca-kube-api-access-hzc8b\") pod \"mysqld-exporter-0\" (UID: \"a60ff952-7be9-480a-be2b-ffbe9bddd9ca\") " pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc 
kubenswrapper[4678]: I1124 11:40:53.429881 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.507189 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.908361 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70557cb4-7672-4047-a601-1cf7723d8c82" path="/var/lib/kubelet/pods/70557cb4-7672-4047-a601-1cf7723d8c82/volumes" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.909224 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc6efef-042b-489a-a545-669ec3783e86" path="/var/lib/kubelet/pods/ddc6efef-042b-489a-a545-669ec3783e86/volumes" Nov 24 11:40:53 crc kubenswrapper[4678]: I1124 11:40:53.995568 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 11:40:54 crc kubenswrapper[4678]: W1124 11:40:54.015782 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e0031b4_15dc_4530_89ae_ffec2f45e9f7.slice/crio-e9c6bb34fae590bd02b9e27af0e38765177b6653849b8f1e8e380cef94b5e4ba WatchSource:0}: Error finding container e9c6bb34fae590bd02b9e27af0e38765177b6653849b8f1e8e380cef94b5e4ba: Status 404 returned error can't find the container with id e9c6bb34fae590bd02b9e27af0e38765177b6653849b8f1e8e380cef94b5e4ba Nov 24 11:40:54 crc kubenswrapper[4678]: I1124 11:40:54.118232 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 11:40:54 crc kubenswrapper[4678]: I1124 11:40:54.851458 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:40:54 crc kubenswrapper[4678]: I1124 11:40:54.853278 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="ceilometer-central-agent" containerID="cri-o://7dc534d8b3a884ab52f5f27c1743263c94e0aed96ed946dacd23fde4a7a943f9" gracePeriod=30 Nov 24 11:40:54 crc kubenswrapper[4678]: I1124 11:40:54.853429 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="sg-core" containerID="cri-o://d76b0a2e4b95a70b50112bbeaf45b4946fcda9d416a53e9cc70f4e8651981102" gracePeriod=30 Nov 24 11:40:54 crc kubenswrapper[4678]: I1124 11:40:54.853473 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="ceilometer-notification-agent" containerID="cri-o://359d073d4e549827de8ab778b1d1c985ed22f21bc18a6ea5571e5dc54b59581b" gracePeriod=30 Nov 24 11:40:54 crc kubenswrapper[4678]: I1124 11:40:54.853474 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="proxy-httpd" containerID="cri-o://57d137219185175453c9359f39ae4ed11ba97cf8ca59d81b5e43f6a0b5bdb9da" gracePeriod=30 Nov 24 11:40:55 crc kubenswrapper[4678]: I1124 11:40:55.014451 4678 generic.go:334] "Generic (PLEG): container finished" podID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerID="57d137219185175453c9359f39ae4ed11ba97cf8ca59d81b5e43f6a0b5bdb9da" exitCode=0 Nov 24 11:40:55 crc kubenswrapper[4678]: I1124 11:40:55.014590 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerDied","Data":"57d137219185175453c9359f39ae4ed11ba97cf8ca59d81b5e43f6a0b5bdb9da"} Nov 24 11:40:55 crc kubenswrapper[4678]: I1124 11:40:55.021301 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"3e0031b4-15dc-4530-89ae-ffec2f45e9f7","Type":"ContainerStarted","Data":"e9c6bb34fae590bd02b9e27af0e38765177b6653849b8f1e8e380cef94b5e4ba"} Nov 24 11:40:55 crc kubenswrapper[4678]: I1124 11:40:55.027427 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a60ff952-7be9-480a-be2b-ffbe9bddd9ca","Type":"ContainerStarted","Data":"e7d7b09e794fdbf2e3dea9a9e55834e59489af9d47e42ceb27705b466ff2594a"} Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.043995 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e0031b4-15dc-4530-89ae-ffec2f45e9f7","Type":"ContainerStarted","Data":"eb7fe6340a872fd2d24a5383df0417559e5dacc16816d194000f48c41e197387"} Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.044437 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.048975 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"a60ff952-7be9-480a-be2b-ffbe9bddd9ca","Type":"ContainerStarted","Data":"23599d02be4505e3897e38fd1612c4183598d896508cb506dbb41fc9ef33428f"} Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.053402 4678 generic.go:334] "Generic (PLEG): container finished" podID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerID="d76b0a2e4b95a70b50112bbeaf45b4946fcda9d416a53e9cc70f4e8651981102" exitCode=2 Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.053444 4678 generic.go:334] "Generic (PLEG): container finished" podID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerID="7dc534d8b3a884ab52f5f27c1743263c94e0aed96ed946dacd23fde4a7a943f9" exitCode=0 Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.053473 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerDied","Data":"d76b0a2e4b95a70b50112bbeaf45b4946fcda9d416a53e9cc70f4e8651981102"} Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.053504 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerDied","Data":"7dc534d8b3a884ab52f5f27c1743263c94e0aed96ed946dacd23fde4a7a943f9"} Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.103721 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.50087136 podStartE2EDuration="3.103694866s" podCreationTimestamp="2025-11-24 11:40:53 +0000 UTC" firstStartedPulling="2025-11-24 11:40:54.128019647 +0000 UTC m=+1465.059079276" lastFinishedPulling="2025-11-24 11:40:54.730843143 +0000 UTC m=+1465.661902782" observedRunningTime="2025-11-24 11:40:56.098706342 +0000 UTC m=+1467.029765991" watchObservedRunningTime="2025-11-24 11:40:56.103694866 +0000 UTC m=+1467.034754515" Nov 24 11:40:56 crc kubenswrapper[4678]: I1124 11:40:56.132056 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.460550691 podStartE2EDuration="3.132035213s" podCreationTimestamp="2025-11-24 11:40:53 +0000 UTC" firstStartedPulling="2025-11-24 11:40:54.013425233 +0000 UTC m=+1464.944484872" lastFinishedPulling="2025-11-24 11:40:54.684909755 +0000 UTC m=+1465.615969394" observedRunningTime="2025-11-24 11:40:56.07992387 +0000 UTC m=+1467.010983509" watchObservedRunningTime="2025-11-24 11:40:56.132035213 +0000 UTC m=+1467.063094852" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.073791 4678 generic.go:334] "Generic (PLEG): container finished" podID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerID="359d073d4e549827de8ab778b1d1c985ed22f21bc18a6ea5571e5dc54b59581b" exitCode=0 Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.073873 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerDied","Data":"359d073d4e549827de8ab778b1d1c985ed22f21bc18a6ea5571e5dc54b59581b"} Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.681909 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cbql5"] Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.686165 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.696907 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbql5"] Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.708564 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6zc\" (UniqueName: \"kubernetes.io/projected/b01e9d21-3cb3-4994-b184-c76a6e283ccb-kube-api-access-dt6zc\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.708733 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-catalog-content\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.708792 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-utilities\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " 
pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.812204 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6zc\" (UniqueName: \"kubernetes.io/projected/b01e9d21-3cb3-4994-b184-c76a6e283ccb-kube-api-access-dt6zc\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.812429 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-catalog-content\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.812509 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-utilities\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.813050 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-catalog-content\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.813324 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-utilities\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " 
pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:57 crc kubenswrapper[4678]: I1124 11:40:57.878405 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6zc\" (UniqueName: \"kubernetes.io/projected/b01e9d21-3cb3-4994-b184-c76a6e283ccb-kube-api-access-dt6zc\") pod \"certified-operators-cbql5\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:58 crc kubenswrapper[4678]: I1124 11:40:58.025396 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.798429 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.935391 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbql5"] Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.966068 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-combined-ca-bundle\") pod \"63bc3a21-7960-4c56-8967-c43986fc8b05\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.966250 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-run-httpd\") pod \"63bc3a21-7960-4c56-8967-c43986fc8b05\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.966346 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-config-data\") pod \"63bc3a21-7960-4c56-8967-c43986fc8b05\" (UID: 
\"63bc3a21-7960-4c56-8967-c43986fc8b05\") " Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.966401 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-log-httpd\") pod \"63bc3a21-7960-4c56-8967-c43986fc8b05\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.966438 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hshg5\" (UniqueName: \"kubernetes.io/projected/63bc3a21-7960-4c56-8967-c43986fc8b05-kube-api-access-hshg5\") pod \"63bc3a21-7960-4c56-8967-c43986fc8b05\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.966556 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-scripts\") pod \"63bc3a21-7960-4c56-8967-c43986fc8b05\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.966587 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-sg-core-conf-yaml\") pod \"63bc3a21-7960-4c56-8967-c43986fc8b05\" (UID: \"63bc3a21-7960-4c56-8967-c43986fc8b05\") " Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.966626 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63bc3a21-7960-4c56-8967-c43986fc8b05" (UID: "63bc3a21-7960-4c56-8967-c43986fc8b05"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.967275 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.968102 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63bc3a21-7960-4c56-8967-c43986fc8b05" (UID: "63bc3a21-7960-4c56-8967-c43986fc8b05"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.982842 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-scripts" (OuterVolumeSpecName: "scripts") pod "63bc3a21-7960-4c56-8967-c43986fc8b05" (UID: "63bc3a21-7960-4c56-8967-c43986fc8b05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:40:59 crc kubenswrapper[4678]: I1124 11:40:59.982899 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63bc3a21-7960-4c56-8967-c43986fc8b05-kube-api-access-hshg5" (OuterVolumeSpecName: "kube-api-access-hshg5") pod "63bc3a21-7960-4c56-8967-c43986fc8b05" (UID: "63bc3a21-7960-4c56-8967-c43986fc8b05"). InnerVolumeSpecName "kube-api-access-hshg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.011639 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63bc3a21-7960-4c56-8967-c43986fc8b05" (UID: "63bc3a21-7960-4c56-8967-c43986fc8b05"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.070474 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63bc3a21-7960-4c56-8967-c43986fc8b05-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.070746 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hshg5\" (UniqueName: \"kubernetes.io/projected/63bc3a21-7960-4c56-8967-c43986fc8b05-kube-api-access-hshg5\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.070813 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.070833 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.080688 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63bc3a21-7960-4c56-8967-c43986fc8b05" (UID: "63bc3a21-7960-4c56-8967-c43986fc8b05"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.123870 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63bc3a21-7960-4c56-8967-c43986fc8b05","Type":"ContainerDied","Data":"40a6ab284cea15b3b9be35e55cbdec0ab1e4220b5c69ce63b18a329c0497a0b9"} Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.123929 4678 scope.go:117] "RemoveContainer" containerID="57d137219185175453c9359f39ae4ed11ba97cf8ca59d81b5e43f6a0b5bdb9da" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.123838 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.125610 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbql5" event={"ID":"b01e9d21-3cb3-4994-b184-c76a6e283ccb","Type":"ContainerStarted","Data":"d4e90fd7d98e9f607415c7995f322e53bd60356b2a1e064565bd9d0ca818431b"} Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.137816 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k52z" event={"ID":"92e69f8c-3e27-40e9-9745-58c570b67749","Type":"ContainerStarted","Data":"be818df800eef90ac7f420e0d7a149f5e65e09955f8adcd0f383f71ae326c2e8"} Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.151401 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-config-data" (OuterVolumeSpecName: "config-data") pod "63bc3a21-7960-4c56-8967-c43986fc8b05" (UID: "63bc3a21-7960-4c56-8967-c43986fc8b05"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.162862 4678 scope.go:117] "RemoveContainer" containerID="d76b0a2e4b95a70b50112bbeaf45b4946fcda9d416a53e9cc70f4e8651981102" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.173310 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.173364 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bc3a21-7960-4c56-8967-c43986fc8b05-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.189938 4678 scope.go:117] "RemoveContainer" containerID="359d073d4e549827de8ab778b1d1c985ed22f21bc18a6ea5571e5dc54b59581b" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.297194 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.297257 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.297302 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.298188 4678 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.298249 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" gracePeriod=600 Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.350949 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t7wls" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" probeResult="failure" output=< Nov 24 11:41:00 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:41:00 crc kubenswrapper[4678]: > Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.383021 4678 scope.go:117] "RemoveContainer" containerID="7dc534d8b3a884ab52f5f27c1743263c94e0aed96ed946dacd23fde4a7a943f9" Nov 24 11:41:00 crc kubenswrapper[4678]: E1124 11:41:00.426986 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.487942 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:00 crc 
kubenswrapper[4678]: I1124 11:41:00.498841 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.510811 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:00 crc kubenswrapper[4678]: E1124 11:41:00.511334 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="ceilometer-central-agent" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.511358 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="ceilometer-central-agent" Nov 24 11:41:00 crc kubenswrapper[4678]: E1124 11:41:00.511444 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="ceilometer-notification-agent" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.511453 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="ceilometer-notification-agent" Nov 24 11:41:00 crc kubenswrapper[4678]: E1124 11:41:00.511474 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="proxy-httpd" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.511482 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="proxy-httpd" Nov 24 11:41:00 crc kubenswrapper[4678]: E1124 11:41:00.511493 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="sg-core" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.511500 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="sg-core" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.511720 4678 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="proxy-httpd" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.511761 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="sg-core" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.511779 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="ceilometer-notification-agent" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.511798 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" containerName="ceilometer-central-agent" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.526354 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.537308 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.571655 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.571818 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.572091 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.687663 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.687728 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-log-httpd\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.687789 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.687830 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.687850 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-scripts\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.687929 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-run-httpd\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.688052 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2t7\" (UniqueName: 
\"kubernetes.io/projected/94757caa-5918-4bc8-89c0-587e0cafd70c-kube-api-access-2t2t7\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.688096 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-config-data\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.789630 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.789697 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-scripts\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.789779 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-run-httpd\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.789903 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t2t7\" (UniqueName: \"kubernetes.io/projected/94757caa-5918-4bc8-89c0-587e0cafd70c-kube-api-access-2t2t7\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc 
kubenswrapper[4678]: I1124 11:41:00.789957 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-config-data\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.789998 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.790018 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-log-httpd\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.790074 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.790248 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-run-httpd\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.790562 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-log-httpd\") pod 
\"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.795780 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-config-data\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.796964 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.798292 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.799613 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.802731 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-scripts\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.806531 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t2t7\" 
(UniqueName: \"kubernetes.io/projected/94757caa-5918-4bc8-89c0-587e0cafd70c-kube-api-access-2t2t7\") pod \"ceilometer-0\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " pod="openstack/ceilometer-0" Nov 24 11:41:00 crc kubenswrapper[4678]: I1124 11:41:00.882241 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.156756 4678 generic.go:334] "Generic (PLEG): container finished" podID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerID="837e763feff0a16f91cd09a84e8eedc9d02e7a4289a5756cbfb5e69e0608b336" exitCode=0 Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.156963 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbql5" event={"ID":"b01e9d21-3cb3-4994-b184-c76a6e283ccb","Type":"ContainerDied","Data":"837e763feff0a16f91cd09a84e8eedc9d02e7a4289a5756cbfb5e69e0608b336"} Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.194967 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" exitCode=0 Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.195121 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363"} Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.195167 4678 scope.go:117] "RemoveContainer" containerID="dd5ea218f678046a66e5b35e3df6bfeb83c4a006c488a84e5029cd1536ff6717" Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.196333 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:41:01 crc kubenswrapper[4678]: E1124 11:41:01.196864 4678 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.209821 4678 generic.go:334] "Generic (PLEG): container finished" podID="92e69f8c-3e27-40e9-9745-58c570b67749" containerID="be818df800eef90ac7f420e0d7a149f5e65e09955f8adcd0f383f71ae326c2e8" exitCode=0 Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.209924 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k52z" event={"ID":"92e69f8c-3e27-40e9-9745-58c570b67749","Type":"ContainerDied","Data":"be818df800eef90ac7f420e0d7a149f5e65e09955f8adcd0f383f71ae326c2e8"} Nov 24 11:41:01 crc kubenswrapper[4678]: W1124 11:41:01.352502 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94757caa_5918_4bc8_89c0_587e0cafd70c.slice/crio-93fccd3594e69213afba23877b90ad8ac63274d910634c1471d61717d056542f WatchSource:0}: Error finding container 93fccd3594e69213afba23877b90ad8ac63274d910634c1471d61717d056542f: Status 404 returned error can't find the container with id 93fccd3594e69213afba23877b90ad8ac63274d910634c1471d61717d056542f Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.357986 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:01 crc kubenswrapper[4678]: I1124 11:41:01.912711 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63bc3a21-7960-4c56-8967-c43986fc8b05" path="/var/lib/kubelet/pods/63bc3a21-7960-4c56-8967-c43986fc8b05/volumes" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.246874 4678 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/heat-db-sync-dnf2l"] Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.284899 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-dnf2l"] Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.306355 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-crx7v"] Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.311020 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.328959 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-crx7v"] Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.341869 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-config-data\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.341978 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-combined-ca-bundle\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.342079 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6mf\" (UniqueName: \"kubernetes.io/projected/7e7fab76-c5f4-450f-be9b-d433395cbcf3-kube-api-access-cz6mf\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.345276 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerStarted","Data":"9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb"} Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.345329 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerStarted","Data":"93fccd3594e69213afba23877b90ad8ac63274d910634c1471d61717d056542f"} Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.354460 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k52z" event={"ID":"92e69f8c-3e27-40e9-9745-58c570b67749","Type":"ContainerStarted","Data":"0bca8608b767b68e3ee95b94418253bd50a1d78623f05cbdd3c5b36dcfa75f49"} Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.392638 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4k52z" podStartSLOduration=2.5174948600000002 podStartE2EDuration="12.392620377s" podCreationTimestamp="2025-11-24 11:40:50 +0000 UTC" firstStartedPulling="2025-11-24 11:40:51.828670155 +0000 UTC m=+1462.759741324" lastFinishedPulling="2025-11-24 11:41:01.703807192 +0000 UTC m=+1472.634866841" observedRunningTime="2025-11-24 11:41:02.379960388 +0000 UTC m=+1473.311020037" watchObservedRunningTime="2025-11-24 11:41:02.392620377 +0000 UTC m=+1473.323680016" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.444375 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-combined-ca-bundle\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.444508 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6mf\" 
(UniqueName: \"kubernetes.io/projected/7e7fab76-c5f4-450f-be9b-d433395cbcf3-kube-api-access-cz6mf\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.444602 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-config-data\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.450952 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-config-data\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.451420 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-combined-ca-bundle\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.465897 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6mf\" (UniqueName: \"kubernetes.io/projected/7e7fab76-c5f4-450f-be9b-d433395cbcf3-kube-api-access-cz6mf\") pod \"heat-db-sync-crx7v\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:02 crc kubenswrapper[4678]: I1124 11:41:02.698849 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:03 crc kubenswrapper[4678]: I1124 11:41:03.315066 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-crx7v"] Nov 24 11:41:03 crc kubenswrapper[4678]: W1124 11:41:03.315127 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e7fab76_c5f4_450f_be9b_d433395cbcf3.slice/crio-40d41909126cf461d379474ff8fcd031f350b56c52c5401b6d5f876cff7cc23c WatchSource:0}: Error finding container 40d41909126cf461d379474ff8fcd031f350b56c52c5401b6d5f876cff7cc23c: Status 404 returned error can't find the container with id 40d41909126cf461d379474ff8fcd031f350b56c52c5401b6d5f876cff7cc23c Nov 24 11:41:03 crc kubenswrapper[4678]: I1124 11:41:03.368472 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbql5" event={"ID":"b01e9d21-3cb3-4994-b184-c76a6e283ccb","Type":"ContainerStarted","Data":"84027399cab545bea6758e80ec59655c6d417fad0c0ff94a1e6e4cef81646603"} Nov 24 11:41:03 crc kubenswrapper[4678]: I1124 11:41:03.371297 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-crx7v" event={"ID":"7e7fab76-c5f4-450f-be9b-d433395cbcf3","Type":"ContainerStarted","Data":"40d41909126cf461d379474ff8fcd031f350b56c52c5401b6d5f876cff7cc23c"} Nov 24 11:41:03 crc kubenswrapper[4678]: I1124 11:41:03.379512 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerStarted","Data":"97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a"} Nov 24 11:41:03 crc kubenswrapper[4678]: I1124 11:41:03.462114 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 11:41:03 crc kubenswrapper[4678]: I1124 11:41:03.925226 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3fbb2c05-03d0-41ad-b306-0d196383c147" path="/var/lib/kubelet/pods/3fbb2c05-03d0-41ad-b306-0d196383c147/volumes" Nov 24 11:41:05 crc kubenswrapper[4678]: I1124 11:41:05.254754 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:41:05 crc kubenswrapper[4678]: I1124 11:41:05.363168 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:41:05 crc kubenswrapper[4678]: I1124 11:41:05.430472 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerStarted","Data":"c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a"} Nov 24 11:41:05 crc kubenswrapper[4678]: I1124 11:41:05.443630 4678 generic.go:334] "Generic (PLEG): container finished" podID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerID="84027399cab545bea6758e80ec59655c6d417fad0c0ff94a1e6e4cef81646603" exitCode=0 Nov 24 11:41:05 crc kubenswrapper[4678]: I1124 11:41:05.443721 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbql5" event={"ID":"b01e9d21-3cb3-4994-b184-c76a6e283ccb","Type":"ContainerDied","Data":"84027399cab545bea6758e80ec59655c6d417fad0c0ff94a1e6e4cef81646603"} Nov 24 11:41:07 crc kubenswrapper[4678]: I1124 11:41:07.473545 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerStarted","Data":"ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b"} Nov 24 11:41:07 crc kubenswrapper[4678]: I1124 11:41:07.474125 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:41:07 crc kubenswrapper[4678]: I1124 11:41:07.477251 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbql5" 
event={"ID":"b01e9d21-3cb3-4994-b184-c76a6e283ccb","Type":"ContainerStarted","Data":"b6d78dad636f4207ab4f150e1fe68bf9bd8bd3c5b9e4638f17dc940839b8637e"} Nov 24 11:41:07 crc kubenswrapper[4678]: I1124 11:41:07.519278 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.862330424 podStartE2EDuration="7.519255315s" podCreationTimestamp="2025-11-24 11:41:00 +0000 UTC" firstStartedPulling="2025-11-24 11:41:01.354686118 +0000 UTC m=+1472.285745747" lastFinishedPulling="2025-11-24 11:41:06.011610999 +0000 UTC m=+1476.942670638" observedRunningTime="2025-11-24 11:41:07.50711593 +0000 UTC m=+1478.438175569" watchObservedRunningTime="2025-11-24 11:41:07.519255315 +0000 UTC m=+1478.450314954" Nov 24 11:41:07 crc kubenswrapper[4678]: I1124 11:41:07.538446 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cbql5" podStartSLOduration=5.550468868 podStartE2EDuration="10.538424248s" podCreationTimestamp="2025-11-24 11:40:57 +0000 UTC" firstStartedPulling="2025-11-24 11:41:01.188800334 +0000 UTC m=+1472.119859973" lastFinishedPulling="2025-11-24 11:41:06.176755714 +0000 UTC m=+1477.107815353" observedRunningTime="2025-11-24 11:41:07.526797536 +0000 UTC m=+1478.457857195" watchObservedRunningTime="2025-11-24 11:41:07.538424248 +0000 UTC m=+1478.469483887" Nov 24 11:41:08 crc kubenswrapper[4678]: I1124 11:41:08.025899 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:41:08 crc kubenswrapper[4678]: I1124 11:41:08.026009 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:41:08 crc kubenswrapper[4678]: I1124 11:41:08.260772 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:09 crc kubenswrapper[4678]: I1124 11:41:09.098101 4678 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cbql5" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="registry-server" probeResult="failure" output=< Nov 24 11:41:09 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:41:09 crc kubenswrapper[4678]: > Nov 24 11:41:09 crc kubenswrapper[4678]: I1124 11:41:09.499696 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="ceilometer-central-agent" containerID="cri-o://9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb" gracePeriod=30 Nov 24 11:41:09 crc kubenswrapper[4678]: I1124 11:41:09.499745 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="ceilometer-notification-agent" containerID="cri-o://97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a" gracePeriod=30 Nov 24 11:41:09 crc kubenswrapper[4678]: I1124 11:41:09.499746 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="proxy-httpd" containerID="cri-o://ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b" gracePeriod=30 Nov 24 11:41:09 crc kubenswrapper[4678]: I1124 11:41:09.499754 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="sg-core" containerID="cri-o://c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a" gracePeriod=30 Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.319643 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerName="rabbitmq" 
containerID="cri-o://d305f097289a80687334143eb9411e020d57ca5b69dadc8b47b0fda3a754ccc7" gracePeriod=604796 Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.368190 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t7wls" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" probeResult="failure" output=< Nov 24 11:41:10 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:41:10 crc kubenswrapper[4678]: > Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.517849 4678 generic.go:334] "Generic (PLEG): container finished" podID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerID="ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b" exitCode=0 Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.518088 4678 generic.go:334] "Generic (PLEG): container finished" podID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerID="c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a" exitCode=2 Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.518176 4678 generic.go:334] "Generic (PLEG): container finished" podID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerID="97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a" exitCode=0 Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.517940 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerDied","Data":"ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b"} Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.518362 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerDied","Data":"c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a"} Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.518460 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerDied","Data":"97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a"} Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.548732 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="728e8f13-52c5-4b48-9fff-8053732311b9" containerName="rabbitmq" containerID="cri-o://8a8cf707155e80e1af5fc5b42d9d80b457334efc638ac5d7a6c2f840eb749a1b" gracePeriod=604795 Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.873406 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:41:10 crc kubenswrapper[4678]: I1124 11:41:10.873989 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:41:11 crc kubenswrapper[4678]: I1124 11:41:11.896214 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:41:11 crc kubenswrapper[4678]: E1124 11:41:11.896651 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:41:11 crc kubenswrapper[4678]: I1124 11:41:11.942659 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4k52z" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="registry-server" probeResult="failure" output=< Nov 24 11:41:11 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:41:11 crc kubenswrapper[4678]: > 
Nov 24 11:41:12 crc kubenswrapper[4678]: I1124 11:41:12.906319 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="728e8f13-52c5-4b48-9fff-8053732311b9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 24 11:41:13 crc kubenswrapper[4678]: I1124 11:41:13.289284 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Nov 24 11:41:16 crc kubenswrapper[4678]: I1124 11:41:16.591247 4678 generic.go:334] "Generic (PLEG): container finished" podID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerID="d305f097289a80687334143eb9411e020d57ca5b69dadc8b47b0fda3a754ccc7" exitCode=0 Nov 24 11:41:16 crc kubenswrapper[4678]: I1124 11:41:16.591335 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6","Type":"ContainerDied","Data":"d305f097289a80687334143eb9411e020d57ca5b69dadc8b47b0fda3a754ccc7"} Nov 24 11:41:17 crc kubenswrapper[4678]: E1124 11:41:17.033437 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728e8f13_52c5_4b48_9fff_8053732311b9.slice/crio-conmon-8a8cf707155e80e1af5fc5b42d9d80b457334efc638ac5d7a6c2f840eb749a1b.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:41:17 crc kubenswrapper[4678]: I1124 11:41:17.608602 4678 generic.go:334] "Generic (PLEG): container finished" podID="728e8f13-52c5-4b48-9fff-8053732311b9" containerID="8a8cf707155e80e1af5fc5b42d9d80b457334efc638ac5d7a6c2f840eb749a1b" exitCode=0 Nov 24 11:41:17 crc kubenswrapper[4678]: I1124 11:41:17.609359 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"728e8f13-52c5-4b48-9fff-8053732311b9","Type":"ContainerDied","Data":"8a8cf707155e80e1af5fc5b42d9d80b457334efc638ac5d7a6c2f840eb749a1b"} Nov 24 11:41:18 crc kubenswrapper[4678]: I1124 11:41:18.104409 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:41:18 crc kubenswrapper[4678]: I1124 11:41:18.177902 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:41:18 crc kubenswrapper[4678]: I1124 11:41:18.352022 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbql5"] Nov 24 11:41:19 crc kubenswrapper[4678]: I1124 11:41:19.649545 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cbql5" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="registry-server" containerID="cri-o://b6d78dad636f4207ab4f150e1fe68bf9bd8bd3c5b9e4638f17dc940839b8637e" gracePeriod=2 Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.256007 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.287783 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/728e8f13-52c5-4b48-9fff-8053732311b9-erlang-cookie-secret\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.287890 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-server-conf\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.287943 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-plugins-conf\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.287981 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-tls\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.288015 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-erlang-cookie\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.288035 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/728e8f13-52c5-4b48-9fff-8053732311b9-pod-info\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.288074 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.288098 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-confd\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.288166 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-plugins\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.288191 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k96n\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-kube-api-access-7k96n\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.288271 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-config-data\") pod \"728e8f13-52c5-4b48-9fff-8053732311b9\" (UID: \"728e8f13-52c5-4b48-9fff-8053732311b9\") " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.292998 
4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.294028 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.300288 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.310300 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-kube-api-access-7k96n" (OuterVolumeSpecName: "kube-api-access-7k96n") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "kube-api-access-7k96n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.313104 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728e8f13-52c5-4b48-9fff-8053732311b9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.313767 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.316747 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/728e8f13-52c5-4b48-9fff-8053732311b9-pod-info" (OuterVolumeSpecName: "pod-info") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.326271 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.394143 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t7wls" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" probeResult="failure" output=< Nov 24 11:41:20 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:41:20 crc kubenswrapper[4678]: > Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.399303 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-config-data" (OuterVolumeSpecName: "config-data") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.414046 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.414129 4678 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/728e8f13-52c5-4b48-9fff-8053732311b9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.414144 4678 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.414157 4678 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc 
kubenswrapper[4678]: I1124 11:41:20.414199 4678 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.414212 4678 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/728e8f13-52c5-4b48-9fff-8053732311b9-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.414300 4678 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.414316 4678 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.414328 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k96n\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-kube-api-access-7k96n\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.541012 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-server-conf" (OuterVolumeSpecName: "server-conf") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.577353 4678 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.629284 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "728e8f13-52c5-4b48-9fff-8053732311b9" (UID: "728e8f13-52c5-4b48-9fff-8053732311b9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.638345 4678 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/728e8f13-52c5-4b48-9fff-8053732311b9-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.638380 4678 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.638391 4678 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/728e8f13-52c5-4b48-9fff-8053732311b9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.676200 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"728e8f13-52c5-4b48-9fff-8053732311b9","Type":"ContainerDied","Data":"f8306af5af9b53e6fb9823ee55cf4f8752b13c03fd2aa4451658bc979e213b5f"} Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.676275 4678 scope.go:117] "RemoveContainer" containerID="8a8cf707155e80e1af5fc5b42d9d80b457334efc638ac5d7a6c2f840eb749a1b" Nov 24 11:41:20 
crc kubenswrapper[4678]: I1124 11:41:20.676784 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.696362 4678 generic.go:334] "Generic (PLEG): container finished" podID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerID="b6d78dad636f4207ab4f150e1fe68bf9bd8bd3c5b9e4638f17dc940839b8637e" exitCode=0 Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.696411 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbql5" event={"ID":"b01e9d21-3cb3-4994-b184-c76a6e283ccb","Type":"ContainerDied","Data":"b6d78dad636f4207ab4f150e1fe68bf9bd8bd3c5b9e4638f17dc940839b8637e"} Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.726991 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.749603 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.760498 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:41:20 crc kubenswrapper[4678]: E1124 11:41:20.761306 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728e8f13-52c5-4b48-9fff-8053732311b9" containerName="setup-container" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.761335 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="728e8f13-52c5-4b48-9fff-8053732311b9" containerName="setup-container" Nov 24 11:41:20 crc kubenswrapper[4678]: E1124 11:41:20.761352 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728e8f13-52c5-4b48-9fff-8053732311b9" containerName="rabbitmq" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.761360 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="728e8f13-52c5-4b48-9fff-8053732311b9" containerName="rabbitmq" Nov 24 11:41:20 crc 
kubenswrapper[4678]: I1124 11:41:20.761681 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="728e8f13-52c5-4b48-9fff-8053732311b9" containerName="rabbitmq" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.763544 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.768414 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.768552 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.770006 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.770083 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.770191 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.770214 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.770405 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-srnh8" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.799766 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855135 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87e447ce-94b3-4e59-a513-fec289651bd6-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855202 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855245 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhw6x\" (UniqueName: \"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-kube-api-access-vhw6x\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855329 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-config-data\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855369 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855411 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " 
pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855496 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855551 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855606 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855653 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.855718 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87e447ce-94b3-4e59-a513-fec289651bd6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 
11:41:20.946579 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961325 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961429 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87e447ce-94b3-4e59-a513-fec289651bd6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961545 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87e447ce-94b3-4e59-a513-fec289651bd6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961578 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961615 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhw6x\" (UniqueName: \"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-kube-api-access-vhw6x\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 
11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961706 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-config-data\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961741 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961787 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961833 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961886 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.961943 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.962905 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.963134 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.970624 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-config-data\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.971729 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87e447ce-94b3-4e59-a513-fec289651bd6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.973644 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 
24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.976153 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.982876 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87e447ce-94b3-4e59-a513-fec289651bd6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:20 crc kubenswrapper[4678]: I1124 11:41:20.987730 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:20.999647 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.005651 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87e447ce-94b3-4e59-a513-fec289651bd6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.010507 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhw6x\" (UniqueName: 
\"kubernetes.io/projected/87e447ce-94b3-4e59-a513-fec289651bd6-kube-api-access-vhw6x\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.042026 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4k52z" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.088901 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"87e447ce-94b3-4e59-a513-fec289651bd6\") " pod="openstack/rabbitmq-server-0" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.114312 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.175234 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68df85789f-97gfc"] Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.188303 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.199865 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.270871 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-97gfc"] Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.306498 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-config\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.306590 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.306785 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.306815 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvth\" (UniqueName: \"kubernetes.io/projected/f0343680-7657-4ef3-b7aa-3d56d1f4090f-kube-api-access-xwvth\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " 
pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.307255 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-svc\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.307333 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.307547 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.410931 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-svc\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.411115 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " 
pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.411183 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.411260 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-config\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.411300 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.411400 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.411429 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwvth\" (UniqueName: \"kubernetes.io/projected/f0343680-7657-4ef3-b7aa-3d56d1f4090f-kube-api-access-xwvth\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" 
Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.413213 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.413413 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-config\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.413437 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.414166 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.414168 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-svc\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.414207 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.435380 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwvth\" (UniqueName: \"kubernetes.io/projected/f0343680-7657-4ef3-b7aa-3d56d1f4090f-kube-api-access-xwvth\") pod \"dnsmasq-dns-68df85789f-97gfc\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.540085 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.574218 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k52z"] Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.764001 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cr22z"] Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.764353 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cr22z" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="registry-server" containerID="cri-o://2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1" gracePeriod=2 Nov 24 11:41:21 crc kubenswrapper[4678]: I1124 11:41:21.939104 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="728e8f13-52c5-4b48-9fff-8053732311b9" path="/var/lib/kubelet/pods/728e8f13-52c5-4b48-9fff-8053732311b9/volumes" Nov 24 11:41:22 crc kubenswrapper[4678]: I1124 11:41:22.738561 4678 generic.go:334] "Generic (PLEG): container finished" podID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" 
containerID="2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1" exitCode=0 Nov 24 11:41:22 crc kubenswrapper[4678]: I1124 11:41:22.738637 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr22z" event={"ID":"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49","Type":"ContainerDied","Data":"2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1"} Nov 24 11:41:24 crc kubenswrapper[4678]: I1124 11:41:24.896638 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:41:24 crc kubenswrapper[4678]: E1124 11:41:24.897465 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:41:25 crc kubenswrapper[4678]: E1124 11:41:25.266008 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1 is running failed: container process not found" containerID="2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:41:25 crc kubenswrapper[4678]: E1124 11:41:25.266701 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1 is running failed: container process not found" containerID="2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:41:25 
crc kubenswrapper[4678]: E1124 11:41:25.267104 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1 is running failed: container process not found" containerID="2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:41:25 crc kubenswrapper[4678]: E1124 11:41:25.267161 4678 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-cr22z" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="registry-server" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.202377 4678 scope.go:117] "RemoveContainer" containerID="3a7b5ef4c4fa5ee85ae38f98dba7ea094ecd28d33191e8a701dfe02bc4368e70" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.399465 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.424115 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:41:27 crc kubenswrapper[4678]: E1124 11:41:27.512319 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 24 11:41:27 crc kubenswrapper[4678]: E1124 11:41:27.512380 4678 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 24 11:41:27 crc kubenswrapper[4678]: E1124 11:41:27.512535 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cz6mf,
ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-crx7v_openstack(7e7fab76-c5f4-450f-be9b-d433395cbcf3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 11:41:27 crc kubenswrapper[4678]: E1124 11:41:27.514600 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-crx7v" podUID="7e7fab76-c5f4-450f-be9b-d433395cbcf3" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.597469 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt6zc\" (UniqueName: \"kubernetes.io/projected/b01e9d21-3cb3-4994-b184-c76a6e283ccb-kube-api-access-dt6zc\") pod \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600016 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-plugins\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: 
\"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600079 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfd85\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-kube-api-access-wfd85\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600135 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-catalog-content\") pod \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600168 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-erlang-cookie\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600255 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-erlang-cookie-secret\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600285 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-server-conf\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600404 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-utilities\") pod \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\" (UID: \"b01e9d21-3cb3-4994-b184-c76a6e283ccb\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600430 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-config-data\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600464 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-plugins-conf\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600497 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-confd\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600517 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600540 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-tls\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.600610 4678 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-pod-info\") pod \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\" (UID: \"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6\") " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.601294 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.601820 4678 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.606654 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-utilities" (OuterVolumeSpecName: "utilities") pod "b01e9d21-3cb3-4994-b184-c76a6e283ccb" (UID: "b01e9d21-3cb3-4994-b184-c76a6e283ccb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.608485 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.610828 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.647859 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b01e9d21-3cb3-4994-b184-c76a6e283ccb" (UID: "b01e9d21-3cb3-4994-b184-c76a6e283ccb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.655831 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01e9d21-3cb3-4994-b184-c76a6e283ccb-kube-api-access-dt6zc" (OuterVolumeSpecName: "kube-api-access-dt6zc") pod "b01e9d21-3cb3-4994-b184-c76a6e283ccb" (UID: "b01e9d21-3cb3-4994-b184-c76a6e283ccb"). InnerVolumeSpecName "kube-api-access-dt6zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.656177 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.657124 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.664221 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-config-data" (OuterVolumeSpecName: "config-data") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.668877 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-kube-api-access-wfd85" (OuterVolumeSpecName: "kube-api-access-wfd85") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "kube-api-access-wfd85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.669015 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-pod-info" (OuterVolumeSpecName: "pod-info") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.682660 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704395 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt6zc\" (UniqueName: \"kubernetes.io/projected/b01e9d21-3cb3-4994-b184-c76a6e283ccb-kube-api-access-dt6zc\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704432 4678 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704442 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfd85\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-kube-api-access-wfd85\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704453 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704463 4678 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704472 4678 reconciler_common.go:293] 
"Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704481 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01e9d21-3cb3-4994-b184-c76a6e283ccb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704493 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704523 4678 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704531 4678 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.704542 4678 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.742016 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-server-conf" (OuterVolumeSpecName: "server-conf") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.781919 4678 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.819460 4678 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.819492 4678 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.936712 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbql5" event={"ID":"b01e9d21-3cb3-4994-b184-c76a6e283ccb","Type":"ContainerDied","Data":"d4e90fd7d98e9f607415c7995f322e53bd60356b2a1e064565bd9d0ca818431b"} Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.936769 4678 scope.go:117] "RemoveContainer" containerID="b6d78dad636f4207ab4f150e1fe68bf9bd8bd3c5b9e4638f17dc940839b8637e" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.936920 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbql5" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.955803 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.965142 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a00cc4f-4f88-4e53-b77c-3c94a2614ff6","Type":"ContainerDied","Data":"ed166c84ee9e1a2992d93fde899118b8229afb9b6a3f2724184ae772123537ab"} Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.965184 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:27 crc kubenswrapper[4678]: E1124 11:41:27.967155 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-crx7v" podUID="7e7fab76-c5f4-450f-be9b-d433395cbcf3" Nov 24 11:41:27 crc kubenswrapper[4678]: I1124 11:41:27.980038 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.027210 4678 scope.go:117] "RemoveContainer" containerID="84027399cab545bea6758e80ec59655c6d417fad0c0ff94a1e6e4cef81646603" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.108348 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" (UID: "5a00cc4f-4f88-4e53-b77c-3c94a2614ff6"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.124090 4678 scope.go:117] "RemoveContainer" containerID="837e763feff0a16f91cd09a84e8eedc9d02e7a4289a5756cbfb5e69e0608b336" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.152274 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-utilities\") pod \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.152367 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8rbr\" (UniqueName: \"kubernetes.io/projected/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-kube-api-access-m8rbr\") pod \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.152548 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-catalog-content\") pod \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\" (UID: \"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49\") " Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.154374 4678 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.155725 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-utilities" (OuterVolumeSpecName: "utilities") pod "b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" (UID: "b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.156649 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbql5"] Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.160122 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-kube-api-access-m8rbr" (OuterVolumeSpecName: "kube-api-access-m8rbr") pod "b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" (UID: "b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49"). InnerVolumeSpecName "kube-api-access-m8rbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.196825 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cbql5"] Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.212808 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-97gfc"] Nov 24 11:41:28 crc kubenswrapper[4678]: W1124 11:41:28.223901 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0343680_7657_4ef3_b7aa_3d56d1f4090f.slice/crio-4c3264406c81ef0389c7bfd334b9076434b7048efd5e8d35f9dcae0d3e4e8c66 WatchSource:0}: Error finding container 4c3264406c81ef0389c7bfd334b9076434b7048efd5e8d35f9dcae0d3e4e8c66: Status 404 returned error can't find the container with id 4c3264406c81ef0389c7bfd334b9076434b7048efd5e8d35f9dcae0d3e4e8c66 Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.254004 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" (UID: "b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.256436 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.256645 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.256739 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8rbr\" (UniqueName: \"kubernetes.io/projected/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49-kube-api-access-m8rbr\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.290041 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.467905 4678 scope.go:117] "RemoveContainer" containerID="d305f097289a80687334143eb9411e020d57ca5b69dadc8b47b0fda3a754ccc7" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.512536 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.516806 4678 scope.go:117] "RemoveContainer" containerID="78a42a92af69cea2096a817c36fa21b3dd0f79b6d7fef3c6e4842c308a764028" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.527758 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.542547 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] 
Nov 24 11:41:28 crc kubenswrapper[4678]: E1124 11:41:28.543109 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="extract-content" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543127 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="extract-content" Nov 24 11:41:28 crc kubenswrapper[4678]: E1124 11:41:28.543138 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="registry-server" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543145 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="registry-server" Nov 24 11:41:28 crc kubenswrapper[4678]: E1124 11:41:28.543183 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerName="rabbitmq" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543192 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerName="rabbitmq" Nov 24 11:41:28 crc kubenswrapper[4678]: E1124 11:41:28.543213 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="extract-content" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543218 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="extract-content" Nov 24 11:41:28 crc kubenswrapper[4678]: E1124 11:41:28.543232 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerName="setup-container" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543239 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerName="setup-container" Nov 24 11:41:28 crc 
kubenswrapper[4678]: E1124 11:41:28.543254 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="extract-utilities" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543261 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="extract-utilities" Nov 24 11:41:28 crc kubenswrapper[4678]: E1124 11:41:28.543272 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="registry-server" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543278 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="registry-server" Nov 24 11:41:28 crc kubenswrapper[4678]: E1124 11:41:28.543284 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="extract-utilities" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543290 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="extract-utilities" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543542 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" containerName="registry-server" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543558 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" containerName="registry-server" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.543569 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" containerName="rabbitmq" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.545192 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.549105 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.550408 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wmvgb" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.550537 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.550662 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.550846 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.551093 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.551245 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.554095 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577311 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577390 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577417 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577452 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577479 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577639 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577661 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577731 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgnq\" (UniqueName: \"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-kube-api-access-nhgnq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577775 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577850 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.577921 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.679511 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhgnq\" 
(UniqueName: \"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-kube-api-access-nhgnq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.679578 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.679632 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.679734 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680030 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680122 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680164 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680206 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680253 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680501 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680527 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680762 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.680791 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.681445 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.681629 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.682300 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc 
kubenswrapper[4678]: I1124 11:41:28.682777 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.685913 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.686791 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.686822 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.696452 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.707179 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhgnq\" (UniqueName: 
\"kubernetes.io/projected/2b3ff76d-79e0-4f90-8b4a-7763c3ca8167-kube-api-access-nhgnq\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.742352 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.886359 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.992033 4678 generic.go:334] "Generic (PLEG): container finished" podID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" containerID="51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a" exitCode=0 Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.992098 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-97gfc" event={"ID":"f0343680-7657-4ef3-b7aa-3d56d1f4090f","Type":"ContainerDied","Data":"51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a"} Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.992384 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-97gfc" event={"ID":"f0343680-7657-4ef3-b7aa-3d56d1f4090f","Type":"ContainerStarted","Data":"4c3264406c81ef0389c7bfd334b9076434b7048efd5e8d35f9dcae0d3e4e8c66"} Nov 24 11:41:28 crc kubenswrapper[4678]: I1124 11:41:28.997861 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"87e447ce-94b3-4e59-a513-fec289651bd6","Type":"ContainerStarted","Data":"ed4082addcafdd754134f9040eceb380e8b21a8c9168e433df70bfc8d9533e5e"} Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.007069 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr22z" event={"ID":"b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49","Type":"ContainerDied","Data":"e79a7d58297ca26ae253047b5b29ae76568dc76730b3592f192e5465bda2a391"} Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.007125 4678 scope.go:117] "RemoveContainer" containerID="2164ca1b6532dd53604245f66c8c2fd323373422b957a333eed670a45196c7e1" Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.007269 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cr22z" Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.060654 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cr22z"] Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.068591 4678 scope.go:117] "RemoveContainer" containerID="71ec58f597d04c51b59e1904f8ce8ed066fda9f01074bcc32ae1073d866f3da8" Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.072552 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cr22z"] Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.109607 4678 scope.go:117] "RemoveContainer" containerID="44c10337a75c885a001aca2a011c1ce20a23e79a6b3baf17c925363f943f366d" Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.431483 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 11:41:29 crc kubenswrapper[4678]: W1124 11:41:29.482043 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b3ff76d_79e0_4f90_8b4a_7763c3ca8167.slice/crio-82c4c145dd9495968f9a61c897534a152c88fc4fd8900e747b84a2f3c1aa7144 WatchSource:0}: Error finding container 82c4c145dd9495968f9a61c897534a152c88fc4fd8900e747b84a2f3c1aa7144: Status 404 returned error can't find the container with id 
82c4c145dd9495968f9a61c897534a152c88fc4fd8900e747b84a2f3c1aa7144 Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.912737 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a00cc4f-4f88-4e53-b77c-3c94a2614ff6" path="/var/lib/kubelet/pods/5a00cc4f-4f88-4e53-b77c-3c94a2614ff6/volumes" Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.917052 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01e9d21-3cb3-4994-b184-c76a6e283ccb" path="/var/lib/kubelet/pods/b01e9d21-3cb3-4994-b184-c76a6e283ccb/volumes" Nov 24 11:41:29 crc kubenswrapper[4678]: I1124 11:41:29.917818 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49" path="/var/lib/kubelet/pods/b4caa21e-16d1-4ea3-94ac-5ce0eb7bfe49/volumes" Nov 24 11:41:30 crc kubenswrapper[4678]: I1124 11:41:30.025272 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-97gfc" event={"ID":"f0343680-7657-4ef3-b7aa-3d56d1f4090f","Type":"ContainerStarted","Data":"01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7"} Nov 24 11:41:30 crc kubenswrapper[4678]: I1124 11:41:30.026521 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:30 crc kubenswrapper[4678]: I1124 11:41:30.034534 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167","Type":"ContainerStarted","Data":"82c4c145dd9495968f9a61c897534a152c88fc4fd8900e747b84a2f3c1aa7144"} Nov 24 11:41:30 crc kubenswrapper[4678]: I1124 11:41:30.056448 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68df85789f-97gfc" podStartSLOduration=9.056428501 podStartE2EDuration="9.056428501s" podCreationTimestamp="2025-11-24 11:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:41:30.047637515 +0000 UTC m=+1500.978697164" watchObservedRunningTime="2025-11-24 11:41:30.056428501 +0000 UTC m=+1500.987488140" Nov 24 11:41:30 crc kubenswrapper[4678]: I1124 11:41:30.326608 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t7wls" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" probeResult="failure" output=< Nov 24 11:41:30 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:41:30 crc kubenswrapper[4678]: > Nov 24 11:41:30 crc kubenswrapper[4678]: I1124 11:41:30.883832 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.7:3000/\": dial tcp 10.217.1.7:3000: connect: connection refused" Nov 24 11:41:31 crc kubenswrapper[4678]: I1124 11:41:31.046934 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"87e447ce-94b3-4e59-a513-fec289651bd6","Type":"ContainerStarted","Data":"12f98ab6b8aa344c386213c48f9adf332a13db5a8010f8f98e911e4b6afa7031"} Nov 24 11:41:32 crc kubenswrapper[4678]: I1124 11:41:32.059190 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167","Type":"ContainerStarted","Data":"c359de690a8026a365d4ab0ce340f63f3319e716081723ce9032acf33912ca0f"} Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.541853 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.614105 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-qlcnq"] Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.614598 
4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" podUID="d024ef08-351c-46f1-a000-8e6803d52572" containerName="dnsmasq-dns" containerID="cri-o://ac880874918e97a5bcb7ccd306cc6c3909f3c1d4d60dd3b522a96b77c4574fe7" gracePeriod=10 Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.820781 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-lsbwn"] Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.824310 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.848334 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-lsbwn"] Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.980827 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.980924 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-dns-svc\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.980951 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " 
pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.980976 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-config\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.981038 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.981064 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:36 crc kubenswrapper[4678]: I1124 11:41:36.981111 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtzc\" (UniqueName: \"kubernetes.io/projected/79679ecc-800f-4387-8516-8fb01f65610b-kube-api-access-djtzc\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.089486 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-dns-svc\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: 
\"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.089534 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.089564 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-config\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.089630 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.089652 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.089709 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djtzc\" (UniqueName: \"kubernetes.io/projected/79679ecc-800f-4387-8516-8fb01f65610b-kube-api-access-djtzc\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " 
pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.089812 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.090646 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.090646 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-config\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.091220 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.091538 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-dns-svc\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.091792 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.092196 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79679ecc-800f-4387-8516-8fb01f65610b-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.111459 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtzc\" (UniqueName: \"kubernetes.io/projected/79679ecc-800f-4387-8516-8fb01f65610b-kube-api-access-djtzc\") pod \"dnsmasq-dns-bb85b8995-lsbwn\" (UID: \"79679ecc-800f-4387-8516-8fb01f65610b\") " pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.155844 4678 generic.go:334] "Generic (PLEG): container finished" podID="d024ef08-351c-46f1-a000-8e6803d52572" containerID="ac880874918e97a5bcb7ccd306cc6c3909f3c1d4d60dd3b522a96b77c4574fe7" exitCode=0 Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.155962 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" event={"ID":"d024ef08-351c-46f1-a000-8e6803d52572","Type":"ContainerDied","Data":"ac880874918e97a5bcb7ccd306cc6c3909f3c1d4d60dd3b522a96b77c4574fe7"} Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.191424 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.395190 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.504600 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-swift-storage-0\") pod \"d024ef08-351c-46f1-a000-8e6803d52572\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.506863 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-config\") pod \"d024ef08-351c-46f1-a000-8e6803d52572\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.506975 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-nb\") pod \"d024ef08-351c-46f1-a000-8e6803d52572\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.508608 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk9h8\" (UniqueName: \"kubernetes.io/projected/d024ef08-351c-46f1-a000-8e6803d52572-kube-api-access-sk9h8\") pod \"d024ef08-351c-46f1-a000-8e6803d52572\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.509885 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-sb\") pod \"d024ef08-351c-46f1-a000-8e6803d52572\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.510263 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-svc\") pod \"d024ef08-351c-46f1-a000-8e6803d52572\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.516816 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d024ef08-351c-46f1-a000-8e6803d52572-kube-api-access-sk9h8" (OuterVolumeSpecName: "kube-api-access-sk9h8") pod "d024ef08-351c-46f1-a000-8e6803d52572" (UID: "d024ef08-351c-46f1-a000-8e6803d52572"). InnerVolumeSpecName "kube-api-access-sk9h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.598372 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d024ef08-351c-46f1-a000-8e6803d52572" (UID: "d024ef08-351c-46f1-a000-8e6803d52572"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.611025 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-config" (OuterVolumeSpecName: "config") pod "d024ef08-351c-46f1-a000-8e6803d52572" (UID: "d024ef08-351c-46f1-a000-8e6803d52572"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.617271 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d024ef08-351c-46f1-a000-8e6803d52572" (UID: "d024ef08-351c-46f1-a000-8e6803d52572"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.617970 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-nb\") pod \"d024ef08-351c-46f1-a000-8e6803d52572\" (UID: \"d024ef08-351c-46f1-a000-8e6803d52572\") " Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.619130 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.619156 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.619222 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk9h8\" (UniqueName: \"kubernetes.io/projected/d024ef08-351c-46f1-a000-8e6803d52572-kube-api-access-sk9h8\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:37 crc kubenswrapper[4678]: W1124 11:41:37.619235 4678 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d024ef08-351c-46f1-a000-8e6803d52572/volumes/kubernetes.io~configmap/ovsdbserver-nb Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.619253 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d024ef08-351c-46f1-a000-8e6803d52572" (UID: "d024ef08-351c-46f1-a000-8e6803d52572"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.633054 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d024ef08-351c-46f1-a000-8e6803d52572" (UID: "d024ef08-351c-46f1-a000-8e6803d52572"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.656914 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d024ef08-351c-46f1-a000-8e6803d52572" (UID: "d024ef08-351c-46f1-a000-8e6803d52572"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.721766 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.721810 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:37 crc kubenswrapper[4678]: I1124 11:41:37.721823 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d024ef08-351c-46f1-a000-8e6803d52572-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.018560 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-lsbwn"] Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.168986 4678 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" event={"ID":"d024ef08-351c-46f1-a000-8e6803d52572","Type":"ContainerDied","Data":"3547d333a1b8f8be0b04d2f62eec6c86479a32bfde5577d1745ae72f46479294"} Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.169039 4678 scope.go:117] "RemoveContainer" containerID="ac880874918e97a5bcb7ccd306cc6c3909f3c1d4d60dd3b522a96b77c4574fe7" Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.169215 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-qlcnq" Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.189417 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" event={"ID":"79679ecc-800f-4387-8516-8fb01f65610b","Type":"ContainerStarted","Data":"d5c45ad2f4fd150769a505f35b1168baabeb6097d0608cf71b760c9f4c189382"} Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.244751 4678 scope.go:117] "RemoveContainer" containerID="c59cfe0972a8476e5fcde0aca5f23f90644c9a9799dbd8fe61b53c39632194cb" Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.308493 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-qlcnq"] Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.318771 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-qlcnq"] Nov 24 11:41:38 crc kubenswrapper[4678]: I1124 11:41:38.896340 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:41:38 crc kubenswrapper[4678]: E1124 11:41:38.896640 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:41:39 crc kubenswrapper[4678]: I1124 11:41:39.203652 4678 generic.go:334] "Generic (PLEG): container finished" podID="79679ecc-800f-4387-8516-8fb01f65610b" containerID="33092e4518e7ed91e2a58760ae7492b10acc177c098d174686fe7ac913ec9137" exitCode=0 Nov 24 11:41:39 crc kubenswrapper[4678]: I1124 11:41:39.203746 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" event={"ID":"79679ecc-800f-4387-8516-8fb01f65610b","Type":"ContainerDied","Data":"33092e4518e7ed91e2a58760ae7492b10acc177c098d174686fe7ac913ec9137"} Nov 24 11:41:39 crc kubenswrapper[4678]: I1124 11:41:39.350344 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:41:39 crc kubenswrapper[4678]: I1124 11:41:39.418030 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:41:39 crc kubenswrapper[4678]: E1124 11:41:39.825515 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd024ef08_351c_46f1_a000_8e6803d52572.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd024ef08_351c_46f1_a000_8e6803d52572.slice/crio-ac880874918e97a5bcb7ccd306cc6c3909f3c1d4d60dd3b522a96b77c4574fe7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd024ef08_351c_46f1_a000_8e6803d52572.slice/crio-3547d333a1b8f8be0b04d2f62eec6c86479a32bfde5577d1745ae72f46479294\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94757caa_5918_4bc8_89c0_587e0cafd70c.slice/crio-conmon-9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94757caa_5918_4bc8_89c0_587e0cafd70c.slice/crio-9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd024ef08_351c_46f1_a000_8e6803d52572.slice/crio-conmon-ac880874918e97a5bcb7ccd306cc6c3909f3c1d4d60dd3b522a96b77c4574fe7.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 24 11:41:39 crc kubenswrapper[4678]: E1124 11:41:39.825656 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94757caa_5918_4bc8_89c0_587e0cafd70c.slice/crio-conmon-9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94757caa_5918_4bc8_89c0_587e0cafd70c.slice/crio-9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 11:41:39 crc kubenswrapper[4678]: I1124 11:41:39.971051 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d024ef08-351c-46f1-a000-8e6803d52572" path="/var/lib/kubelet/pods/d024ef08-351c-46f1-a000-8e6803d52572/volumes" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.168011 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7wls"] Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.219941 4678 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.224321 4678 generic.go:334] "Generic (PLEG): container finished" podID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerID="9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb" exitCode=137 Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.224377 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerDied","Data":"9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb"} Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.224741 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94757caa-5918-4bc8-89c0-587e0cafd70c","Type":"ContainerDied","Data":"93fccd3594e69213afba23877b90ad8ac63274d910634c1471d61717d056542f"} Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.224878 4678 scope.go:117] "RemoveContainer" containerID="ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.230709 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" event={"ID":"79679ecc-800f-4387-8516-8fb01f65610b","Type":"ContainerStarted","Data":"39293ad19c49e53c6b11604cbd5415518f242b62fbe9ba9ad7de0adc405e8be9"} Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.230760 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.293836 4678 scope.go:117] "RemoveContainer" containerID="c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.320446 4678 scope.go:117] "RemoveContainer" containerID="97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a" Nov 24 11:41:40 crc 
kubenswrapper[4678]: I1124 11:41:40.357455 4678 scope.go:117] "RemoveContainer" containerID="9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.389709 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-sg-core-conf-yaml\") pod \"94757caa-5918-4bc8-89c0-587e0cafd70c\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.390036 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-run-httpd\") pod \"94757caa-5918-4bc8-89c0-587e0cafd70c\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.390636 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-log-httpd\") pod \"94757caa-5918-4bc8-89c0-587e0cafd70c\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.391162 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-config-data\") pod \"94757caa-5918-4bc8-89c0-587e0cafd70c\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.391758 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-scripts\") pod \"94757caa-5918-4bc8-89c0-587e0cafd70c\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.391952 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2t7\" (UniqueName: \"kubernetes.io/projected/94757caa-5918-4bc8-89c0-587e0cafd70c-kube-api-access-2t2t7\") pod \"94757caa-5918-4bc8-89c0-587e0cafd70c\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.390579 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "94757caa-5918-4bc8-89c0-587e0cafd70c" (UID: "94757caa-5918-4bc8-89c0-587e0cafd70c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.391070 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "94757caa-5918-4bc8-89c0-587e0cafd70c" (UID: "94757caa-5918-4bc8-89c0-587e0cafd70c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.392630 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-combined-ca-bundle\") pod \"94757caa-5918-4bc8-89c0-587e0cafd70c\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.392955 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-ceilometer-tls-certs\") pod \"94757caa-5918-4bc8-89c0-587e0cafd70c\" (UID: \"94757caa-5918-4bc8-89c0-587e0cafd70c\") " Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.393866 4678 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.393943 4678 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94757caa-5918-4bc8-89c0-587e0cafd70c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.395394 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-scripts" (OuterVolumeSpecName: "scripts") pod "94757caa-5918-4bc8-89c0-587e0cafd70c" (UID: "94757caa-5918-4bc8-89c0-587e0cafd70c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.415849 4678 scope.go:117] "RemoveContainer" containerID="ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b" Nov 24 11:41:40 crc kubenswrapper[4678]: E1124 11:41:40.420869 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b\": container with ID starting with ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b not found: ID does not exist" containerID="ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.420948 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b"} err="failed to get container status \"ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b\": rpc error: code = NotFound desc = could not find container \"ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b\": container with ID starting with ee42929e4c5a9e2365889902090ca7b04f8252cabe7d098abb6b69b4ee771a8b not found: ID does not exist" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.420973 4678 scope.go:117] "RemoveContainer" containerID="c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a" Nov 24 11:41:40 crc kubenswrapper[4678]: E1124 11:41:40.421534 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a\": container with ID starting with c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a not found: ID does not exist" containerID="c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.421579 
4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a"} err="failed to get container status \"c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a\": rpc error: code = NotFound desc = could not find container \"c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a\": container with ID starting with c934ed7087f37a052070a27e281cebdc9fab1860d6ded382b7aba605e67bb07a not found: ID does not exist" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.421606 4678 scope.go:117] "RemoveContainer" containerID="97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a" Nov 24 11:41:40 crc kubenswrapper[4678]: E1124 11:41:40.422772 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a\": container with ID starting with 97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a not found: ID does not exist" containerID="97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.422795 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a"} err="failed to get container status \"97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a\": rpc error: code = NotFound desc = could not find container \"97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a\": container with ID starting with 97d2ad02d3be40e19c8d32cdb8b0a6e987f6102ef0c0415257ea66575c45a39a not found: ID does not exist" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.422815 4678 scope.go:117] "RemoveContainer" containerID="9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb" Nov 24 11:41:40 crc kubenswrapper[4678]: E1124 
11:41:40.423211 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb\": container with ID starting with 9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb not found: ID does not exist" containerID="9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.423246 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb"} err="failed to get container status \"9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb\": rpc error: code = NotFound desc = could not find container \"9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb\": container with ID starting with 9457f0925cb1d55af690c967783a334b42c536c416f46fa0179b501139b71fcb not found: ID does not exist" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.424136 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94757caa-5918-4bc8-89c0-587e0cafd70c-kube-api-access-2t2t7" (OuterVolumeSpecName: "kube-api-access-2t2t7") pod "94757caa-5918-4bc8-89c0-587e0cafd70c" (UID: "94757caa-5918-4bc8-89c0-587e0cafd70c"). InnerVolumeSpecName "kube-api-access-2t2t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.453403 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "94757caa-5918-4bc8-89c0-587e0cafd70c" (UID: "94757caa-5918-4bc8-89c0-587e0cafd70c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.490365 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "94757caa-5918-4bc8-89c0-587e0cafd70c" (UID: "94757caa-5918-4bc8-89c0-587e0cafd70c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.507985 4678 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.508028 4678 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.508043 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.508064 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t2t7\" (UniqueName: \"kubernetes.io/projected/94757caa-5918-4bc8-89c0-587e0cafd70c-kube-api-access-2t2t7\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.524609 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-config-data" (OuterVolumeSpecName: "config-data") pod "94757caa-5918-4bc8-89c0-587e0cafd70c" (UID: "94757caa-5918-4bc8-89c0-587e0cafd70c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.529935 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94757caa-5918-4bc8-89c0-587e0cafd70c" (UID: "94757caa-5918-4bc8-89c0-587e0cafd70c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.610282 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.610337 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94757caa-5918-4bc8-89c0-587e0cafd70c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.900453 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:41:40 crc kubenswrapper[4678]: I1124 11:41:40.924563 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" podStartSLOduration=4.924543413 podStartE2EDuration="4.924543413s" podCreationTimestamp="2025-11-24 11:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:41:40.276776387 +0000 UTC m=+1511.207836026" watchObservedRunningTime="2025-11-24 11:41:40.924543413 +0000 UTC m=+1511.855603052" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.241183 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.241261 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t7wls" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" containerID="cri-o://af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede" gracePeriod=2 Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.501872 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.522861 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.541354 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:41 crc kubenswrapper[4678]: E1124 11:41:41.542757 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="ceilometer-central-agent" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.542782 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="ceilometer-central-agent" Nov 24 11:41:41 crc kubenswrapper[4678]: E1124 11:41:41.542805 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="ceilometer-notification-agent" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.542813 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="ceilometer-notification-agent" Nov 24 11:41:41 crc kubenswrapper[4678]: E1124 11:41:41.542834 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d024ef08-351c-46f1-a000-8e6803d52572" containerName="dnsmasq-dns" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.542842 4678 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d024ef08-351c-46f1-a000-8e6803d52572" containerName="dnsmasq-dns" Nov 24 11:41:41 crc kubenswrapper[4678]: E1124 11:41:41.542854 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d024ef08-351c-46f1-a000-8e6803d52572" containerName="init" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.542862 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d024ef08-351c-46f1-a000-8e6803d52572" containerName="init" Nov 24 11:41:41 crc kubenswrapper[4678]: E1124 11:41:41.542882 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="proxy-httpd" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.542889 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="proxy-httpd" Nov 24 11:41:41 crc kubenswrapper[4678]: E1124 11:41:41.542904 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="sg-core" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.542911 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="sg-core" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.543159 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="ceilometer-notification-agent" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.543181 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="sg-core" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.543198 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="d024ef08-351c-46f1-a000-8e6803d52572" containerName="dnsmasq-dns" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.543216 4678 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="ceilometer-central-agent" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.543238 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" containerName="proxy-httpd" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.546298 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.548612 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.548843 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.549836 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.555654 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.743253 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ee2246c-b989-4aa6-9592-c84f9e8252e1-log-httpd\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.743338 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.743363 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-scripts\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.743475 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ee2246c-b989-4aa6-9592-c84f9e8252e1-run-httpd\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.743505 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.743568 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-config-data\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.743645 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.743734 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94b8g\" (UniqueName: 
\"kubernetes.io/projected/6ee2246c-b989-4aa6-9592-c84f9e8252e1-kube-api-access-94b8g\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.762380 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.845540 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.845624 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-scripts\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.845724 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ee2246c-b989-4aa6-9592-c84f9e8252e1-run-httpd\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.845753 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.845812 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-config-data\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.845894 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.845970 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94b8g\" (UniqueName: \"kubernetes.io/projected/6ee2246c-b989-4aa6-9592-c84f9e8252e1-kube-api-access-94b8g\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.846074 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ee2246c-b989-4aa6-9592-c84f9e8252e1-log-httpd\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.846750 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ee2246c-b989-4aa6-9592-c84f9e8252e1-log-httpd\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.847361 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ee2246c-b989-4aa6-9592-c84f9e8252e1-run-httpd\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.855223 
4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-config-data\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.863805 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.864365 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.864824 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-scripts\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.865527 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94b8g\" (UniqueName: \"kubernetes.io/projected/6ee2246c-b989-4aa6-9592-c84f9e8252e1-kube-api-access-94b8g\") pod \"ceilometer-0\" (UID: \"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.866058 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ee2246c-b989-4aa6-9592-c84f9e8252e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"6ee2246c-b989-4aa6-9592-c84f9e8252e1\") " pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.872067 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.918531 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94757caa-5918-4bc8-89c0-587e0cafd70c" path="/var/lib/kubelet/pods/94757caa-5918-4bc8-89c0-587e0cafd70c/volumes" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.949605 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-utilities\") pod \"a785375f-ace8-49dd-be97-c175855a2ecd\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.949980 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7xvk\" (UniqueName: \"kubernetes.io/projected/a785375f-ace8-49dd-be97-c175855a2ecd-kube-api-access-m7xvk\") pod \"a785375f-ace8-49dd-be97-c175855a2ecd\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.950124 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-catalog-content\") pod \"a785375f-ace8-49dd-be97-c175855a2ecd\" (UID: \"a785375f-ace8-49dd-be97-c175855a2ecd\") " Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.958012 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-utilities" (OuterVolumeSpecName: "utilities") pod "a785375f-ace8-49dd-be97-c175855a2ecd" (UID: "a785375f-ace8-49dd-be97-c175855a2ecd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:41 crc kubenswrapper[4678]: I1124 11:41:41.960932 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a785375f-ace8-49dd-be97-c175855a2ecd-kube-api-access-m7xvk" (OuterVolumeSpecName: "kube-api-access-m7xvk") pod "a785375f-ace8-49dd-be97-c175855a2ecd" (UID: "a785375f-ace8-49dd-be97-c175855a2ecd"). InnerVolumeSpecName "kube-api-access-m7xvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.053529 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7xvk\" (UniqueName: \"kubernetes.io/projected/a785375f-ace8-49dd-be97-c175855a2ecd-kube-api-access-m7xvk\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.053565 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.057745 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a785375f-ace8-49dd-be97-c175855a2ecd" (UID: "a785375f-ace8-49dd-be97-c175855a2ecd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.155848 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a785375f-ace8-49dd-be97-c175855a2ecd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.255739 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-crx7v" event={"ID":"7e7fab76-c5f4-450f-be9b-d433395cbcf3","Type":"ContainerStarted","Data":"d01b5c489cb384dca56647b6e2d16298a540b316474a27be35c7bf253b578a54"} Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.260343 4678 generic.go:334] "Generic (PLEG): container finished" podID="a785375f-ace8-49dd-be97-c175855a2ecd" containerID="af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede" exitCode=0 Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.260641 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7wls" event={"ID":"a785375f-ace8-49dd-be97-c175855a2ecd","Type":"ContainerDied","Data":"af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede"} Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.260681 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7wls" event={"ID":"a785375f-ace8-49dd-be97-c175855a2ecd","Type":"ContainerDied","Data":"4e42fc6bcdb3cecef49673ca8d1f93b67742f2e944adec8f9e87250e17f774f7"} Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.260701 4678 scope.go:117] "RemoveContainer" containerID="af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.260790 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t7wls" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.302740 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-crx7v" podStartSLOduration=2.533322414 podStartE2EDuration="40.302716764s" podCreationTimestamp="2025-11-24 11:41:02 +0000 UTC" firstStartedPulling="2025-11-24 11:41:03.317871843 +0000 UTC m=+1474.248931482" lastFinishedPulling="2025-11-24 11:41:41.087266193 +0000 UTC m=+1512.018325832" observedRunningTime="2025-11-24 11:41:42.278055131 +0000 UTC m=+1513.209114770" watchObservedRunningTime="2025-11-24 11:41:42.302716764 +0000 UTC m=+1513.233776403" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.306092 4678 scope.go:117] "RemoveContainer" containerID="2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.309847 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7wls"] Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.320228 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t7wls"] Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.332121 4678 scope.go:117] "RemoveContainer" containerID="510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874" Nov 24 11:41:42 crc kubenswrapper[4678]: W1124 11:41:42.383386 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ee2246c_b989_4aa6_9592_c84f9e8252e1.slice/crio-d62f520ab9dc118b0386a5a71a5db34c06c68739d78da1b32ad287bcc2872ef8 WatchSource:0}: Error finding container d62f520ab9dc118b0386a5a71a5db34c06c68739d78da1b32ad287bcc2872ef8: Status 404 returned error can't find the container with id d62f520ab9dc118b0386a5a71a5db34c06c68739d78da1b32ad287bcc2872ef8 Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.384332 4678 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.401063 4678 scope.go:117] "RemoveContainer" containerID="af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede" Nov 24 11:41:42 crc kubenswrapper[4678]: E1124 11:41:42.401771 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede\": container with ID starting with af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede not found: ID does not exist" containerID="af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.401797 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede"} err="failed to get container status \"af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede\": rpc error: code = NotFound desc = could not find container \"af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede\": container with ID starting with af00aa936743955df601791ebff94f5cd5b57707ad340d74d029214e48cfcede not found: ID does not exist" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.401817 4678 scope.go:117] "RemoveContainer" containerID="2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3" Nov 24 11:41:42 crc kubenswrapper[4678]: E1124 11:41:42.402322 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3\": container with ID starting with 2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3 not found: ID does not exist" containerID="2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3" Nov 24 11:41:42 crc 
kubenswrapper[4678]: I1124 11:41:42.402351 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3"} err="failed to get container status \"2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3\": rpc error: code = NotFound desc = could not find container \"2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3\": container with ID starting with 2b2279e3e8fe0e61ebf1e772d681782db6460d6814839115fcd4f836e7929aa3 not found: ID does not exist" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.402369 4678 scope.go:117] "RemoveContainer" containerID="510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874" Nov 24 11:41:42 crc kubenswrapper[4678]: E1124 11:41:42.402964 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874\": container with ID starting with 510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874 not found: ID does not exist" containerID="510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874" Nov 24 11:41:42 crc kubenswrapper[4678]: I1124 11:41:42.402985 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874"} err="failed to get container status \"510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874\": rpc error: code = NotFound desc = could not find container \"510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874\": container with ID starting with 510c65502f92c488119279288c09420bf58433b244cac86639a9f2f5bab0c874 not found: ID does not exist" Nov 24 11:41:43 crc kubenswrapper[4678]: I1124 11:41:43.287263 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6ee2246c-b989-4aa6-9592-c84f9e8252e1","Type":"ContainerStarted","Data":"d62f520ab9dc118b0386a5a71a5db34c06c68739d78da1b32ad287bcc2872ef8"} Nov 24 11:41:43 crc kubenswrapper[4678]: I1124 11:41:43.908799 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" path="/var/lib/kubelet/pods/a785375f-ace8-49dd-be97-c175855a2ecd/volumes" Nov 24 11:41:44 crc kubenswrapper[4678]: I1124 11:41:44.300118 4678 generic.go:334] "Generic (PLEG): container finished" podID="7e7fab76-c5f4-450f-be9b-d433395cbcf3" containerID="d01b5c489cb384dca56647b6e2d16298a540b316474a27be35c7bf253b578a54" exitCode=0 Nov 24 11:41:44 crc kubenswrapper[4678]: I1124 11:41:44.300198 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-crx7v" event={"ID":"7e7fab76-c5f4-450f-be9b-d433395cbcf3","Type":"ContainerDied","Data":"d01b5c489cb384dca56647b6e2d16298a540b316474a27be35c7bf253b578a54"} Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.325555 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-crx7v" event={"ID":"7e7fab76-c5f4-450f-be9b-d433395cbcf3","Type":"ContainerDied","Data":"40d41909126cf461d379474ff8fcd031f350b56c52c5401b6d5f876cff7cc23c"} Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.325848 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40d41909126cf461d379474ff8fcd031f350b56c52c5401b6d5f876cff7cc23c" Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.358575 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.458572 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-config-data\") pod \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.458922 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-combined-ca-bundle\") pod \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.459214 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz6mf\" (UniqueName: \"kubernetes.io/projected/7e7fab76-c5f4-450f-be9b-d433395cbcf3-kube-api-access-cz6mf\") pod \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\" (UID: \"7e7fab76-c5f4-450f-be9b-d433395cbcf3\") " Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.463549 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7fab76-c5f4-450f-be9b-d433395cbcf3-kube-api-access-cz6mf" (OuterVolumeSpecName: "kube-api-access-cz6mf") pod "7e7fab76-c5f4-450f-be9b-d433395cbcf3" (UID: "7e7fab76-c5f4-450f-be9b-d433395cbcf3"). InnerVolumeSpecName "kube-api-access-cz6mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.493750 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e7fab76-c5f4-450f-be9b-d433395cbcf3" (UID: "7e7fab76-c5f4-450f-be9b-d433395cbcf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.548724 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-config-data" (OuterVolumeSpecName: "config-data") pod "7e7fab76-c5f4-450f-be9b-d433395cbcf3" (UID: "7e7fab76-c5f4-450f-be9b-d433395cbcf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.563052 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz6mf\" (UniqueName: \"kubernetes.io/projected/7e7fab76-c5f4-450f-be9b-d433395cbcf3-kube-api-access-cz6mf\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.563096 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:46 crc kubenswrapper[4678]: I1124 11:41:46.563112 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7fab76-c5f4-450f-be9b-d433395cbcf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:47 crc kubenswrapper[4678]: I1124 11:41:47.194814 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb85b8995-lsbwn" Nov 24 11:41:47 crc kubenswrapper[4678]: I1124 11:41:47.285114 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-97gfc"] Nov 24 11:41:47 crc kubenswrapper[4678]: I1124 11:41:47.285435 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68df85789f-97gfc" podUID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" containerName="dnsmasq-dns" containerID="cri-o://01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7" gracePeriod=10 
Nov 24 11:41:47 crc kubenswrapper[4678]: I1124 11:41:47.396513 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-crx7v" Nov 24 11:41:47 crc kubenswrapper[4678]: I1124 11:41:47.400006 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ee2246c-b989-4aa6-9592-c84f9e8252e1","Type":"ContainerStarted","Data":"19510309b5f065ae7c4e022398b556697553e5e41c8f21c21e6c0b6d4ddcf857"} Nov 24 11:41:47 crc kubenswrapper[4678]: I1124 11:41:47.400070 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ee2246c-b989-4aa6-9592-c84f9e8252e1","Type":"ContainerStarted","Data":"ac3e806f3b8364e12850067da25d9694c231dd9af28a02f4e9185fe16dfd3e04"} Nov 24 11:41:47 crc kubenswrapper[4678]: I1124 11:41:47.992118 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.107321 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-config\") pod \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.107464 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-swift-storage-0\") pod \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.107508 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb\") pod \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\" (UID: 
\"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.107633 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-svc\") pod \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.107709 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-openstack-edpm-ipam\") pod \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.107768 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-nb\") pod \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.107797 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwvth\" (UniqueName: \"kubernetes.io/projected/f0343680-7657-4ef3-b7aa-3d56d1f4090f-kube-api-access-xwvth\") pod \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.118862 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0343680-7657-4ef3-b7aa-3d56d1f4090f-kube-api-access-xwvth" (OuterVolumeSpecName: "kube-api-access-xwvth") pod "f0343680-7657-4ef3-b7aa-3d56d1f4090f" (UID: "f0343680-7657-4ef3-b7aa-3d56d1f4090f"). InnerVolumeSpecName "kube-api-access-xwvth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.191476 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0343680-7657-4ef3-b7aa-3d56d1f4090f" (UID: "f0343680-7657-4ef3-b7aa-3d56d1f4090f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.211233 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.211275 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwvth\" (UniqueName: \"kubernetes.io/projected/f0343680-7657-4ef3-b7aa-3d56d1f4090f-kube-api-access-xwvth\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.212305 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-config" (OuterVolumeSpecName: "config") pod "f0343680-7657-4ef3-b7aa-3d56d1f4090f" (UID: "f0343680-7657-4ef3-b7aa-3d56d1f4090f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.219515 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f0343680-7657-4ef3-b7aa-3d56d1f4090f" (UID: "f0343680-7657-4ef3-b7aa-3d56d1f4090f"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.251436 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0343680-7657-4ef3-b7aa-3d56d1f4090f" (UID: "f0343680-7657-4ef3-b7aa-3d56d1f4090f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.258714 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb podName:f0343680-7657-4ef3-b7aa-3d56d1f4090f nodeName:}" failed. No retries permitted until 2025-11-24 11:41:48.758685001 +0000 UTC m=+1519.689744640 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb") pod "f0343680-7657-4ef3-b7aa-3d56d1f4090f" (UID: "f0343680-7657-4ef3-b7aa-3d56d1f4090f") : error deleting /var/lib/kubelet/pods/f0343680-7657-4ef3-b7aa-3d56d1f4090f/volume-subpaths: remove /var/lib/kubelet/pods/f0343680-7657-4ef3-b7aa-3d56d1f4090f/volume-subpaths: no such file or directory Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.258974 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0343680-7657-4ef3-b7aa-3d56d1f4090f" (UID: "f0343680-7657-4ef3-b7aa-3d56d1f4090f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.314915 4678 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-config\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.315191 4678 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.315274 4678 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.315347 4678 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.418628 4678 generic.go:334] "Generic (PLEG): container finished" podID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" containerID="01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7" exitCode=0 Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.418686 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-97gfc" event={"ID":"f0343680-7657-4ef3-b7aa-3d56d1f4090f","Type":"ContainerDied","Data":"01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7"} Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.419010 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-97gfc" event={"ID":"f0343680-7657-4ef3-b7aa-3d56d1f4090f","Type":"ContainerDied","Data":"4c3264406c81ef0389c7bfd334b9076434b7048efd5e8d35f9dcae0d3e4e8c66"} 
Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.419028 4678 scope.go:117] "RemoveContainer" containerID="01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.418714 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-97gfc" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.421985 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ee2246c-b989-4aa6-9592-c84f9e8252e1","Type":"ContainerStarted","Data":"8c2e256b42d980a8b07dcfddc2af40235c7c3f048d84b4be721743a99a81faf7"} Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.444734 4678 scope.go:117] "RemoveContainer" containerID="51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.467823 4678 scope.go:117] "RemoveContainer" containerID="01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7" Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.468356 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7\": container with ID starting with 01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7 not found: ID does not exist" containerID="01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.468399 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7"} err="failed to get container status \"01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7\": rpc error: code = NotFound desc = could not find container \"01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7\": container with ID starting with 
01b99ea84a30295e685723e5fb6d9965b87751ea45c50cc4117faee5f1230cb7 not found: ID does not exist" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.468426 4678 scope.go:117] "RemoveContainer" containerID="51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a" Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.468744 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a\": container with ID starting with 51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a not found: ID does not exist" containerID="51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.468775 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a"} err="failed to get container status \"51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a\": rpc error: code = NotFound desc = could not find container \"51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a\": container with ID starting with 51df9133d6e723704e2afaf1503809fac91dc80180410c0663601eebb624464a not found: ID does not exist" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.827090 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb\") pod \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\" (UID: \"f0343680-7657-4ef3-b7aa-3d56d1f4090f\") " Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.827561 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0343680-7657-4ef3-b7aa-3d56d1f4090f" (UID: 
"f0343680-7657-4ef3-b7aa-3d56d1f4090f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.828029 4678 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0343680-7657-4ef3-b7aa-3d56d1f4090f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834016 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-666c8594cc-27c89"] Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.834550 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834562 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.834598 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" containerName="dnsmasq-dns" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834603 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" containerName="dnsmasq-dns" Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.834618 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="extract-content" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834624 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="extract-content" Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.834648 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" containerName="init" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 
11:41:48.834654 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" containerName="init" Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.834680 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7fab76-c5f4-450f-be9b-d433395cbcf3" containerName="heat-db-sync" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834686 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7fab76-c5f4-450f-be9b-d433395cbcf3" containerName="heat-db-sync" Nov 24 11:41:48 crc kubenswrapper[4678]: E1124 11:41:48.834696 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="extract-utilities" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834702 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="extract-utilities" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834942 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a785375f-ace8-49dd-be97-c175855a2ecd" containerName="registry-server" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834955 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" containerName="dnsmasq-dns" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.834965 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7fab76-c5f4-450f-be9b-d433395cbcf3" containerName="heat-db-sync" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.835811 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.851052 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-666c8594cc-27c89"] Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.929743 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-config-data-custom\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.929895 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-config-data\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.929948 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhmg\" (UniqueName: \"kubernetes.io/projected/6b75b7f8-46a4-423a-bd0f-910b078e32ed-kube-api-access-lmhmg\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.929983 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-combined-ca-bundle\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.938398 4678 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-api-5cc4ff9998-ks46b"] Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.945960 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:48 crc kubenswrapper[4678]: I1124 11:41:48.981360 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cc4ff9998-ks46b"] Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.031973 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-combined-ca-bundle\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.032048 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-config-data-custom\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.032187 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-config-data\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.032238 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhmg\" (UniqueName: \"kubernetes.io/projected/6b75b7f8-46a4-423a-bd0f-910b078e32ed-kube-api-access-lmhmg\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: 
I1124 11:41:49.045746 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-config-data\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.048881 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-config-data-custom\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.055939 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b75b7f8-46a4-423a-bd0f-910b078e32ed-combined-ca-bundle\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.079753 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84b6779dd-5vgzv"] Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.081826 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.088434 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhmg\" (UniqueName: \"kubernetes.io/projected/6b75b7f8-46a4-423a-bd0f-910b078e32ed-kube-api-access-lmhmg\") pod \"heat-engine-666c8594cc-27c89\" (UID: \"6b75b7f8-46a4-423a-bd0f-910b078e32ed\") " pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.107008 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84b6779dd-5vgzv"] Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.134812 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-internal-tls-certs\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.134873 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-config-data-custom\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.134891 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-config-data\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.134926 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-combined-ca-bundle\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.135001 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-public-tls-certs\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.135034 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j75p\" (UniqueName: \"kubernetes.io/projected/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-kube-api-access-7j75p\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.140763 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-97gfc"] Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.201552 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.270282 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw296\" (UniqueName: \"kubernetes.io/projected/97d6d2c5-9baf-480a-b82b-d283121c72d3-kube-api-access-kw296\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.270742 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-public-tls-certs\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.270785 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-public-tls-certs\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.270833 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-internal-tls-certs\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.270878 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j75p\" (UniqueName: \"kubernetes.io/projected/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-kube-api-access-7j75p\") pod \"heat-api-5cc4ff9998-ks46b\" 
(UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.270904 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-config-data-custom\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.271090 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-config-data\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.271146 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-combined-ca-bundle\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.271263 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-internal-tls-certs\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.271342 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-config-data-custom\") pod \"heat-api-5cc4ff9998-ks46b\" 
(UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.271367 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-config-data\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.271446 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-combined-ca-bundle\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.285451 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-public-tls-certs\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.288392 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-internal-tls-certs\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.290725 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-97gfc"] Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.292204 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-combined-ca-bundle\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.293585 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-config-data\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.296852 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j75p\" (UniqueName: \"kubernetes.io/projected/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-kube-api-access-7j75p\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.298459 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f258680a-b33d-4eec-8fce-3f6f5d3a00ee-config-data-custom\") pod \"heat-api-5cc4ff9998-ks46b\" (UID: \"f258680a-b33d-4eec-8fce-3f6f5d3a00ee\") " pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.374342 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-config-data\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.374410 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-combined-ca-bundle\") pod 
\"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.374570 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw296\" (UniqueName: \"kubernetes.io/projected/97d6d2c5-9baf-480a-b82b-d283121c72d3-kube-api-access-kw296\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.374628 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-public-tls-certs\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.374663 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-internal-tls-certs\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.374711 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-config-data-custom\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.381903 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-config-data-custom\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: 
\"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.385034 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-config-data\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.385778 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-combined-ca-bundle\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.394608 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-internal-tls-certs\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.394832 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d6d2c5-9baf-480a-b82b-d283121c72d3-public-tls-certs\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.400867 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw296\" (UniqueName: \"kubernetes.io/projected/97d6d2c5-9baf-480a-b82b-d283121c72d3-kube-api-access-kw296\") pod \"heat-cfnapi-84b6779dd-5vgzv\" (UID: \"97d6d2c5-9baf-480a-b82b-d283121c72d3\") " pod="openstack/heat-cfnapi-84b6779dd-5vgzv" 
Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.499558 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.584752 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.838622 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-666c8594cc-27c89"] Nov 24 11:41:49 crc kubenswrapper[4678]: I1124 11:41:49.922084 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0343680-7657-4ef3-b7aa-3d56d1f4090f" path="/var/lib/kubelet/pods/f0343680-7657-4ef3-b7aa-3d56d1f4090f/volumes" Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.009466 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84b6779dd-5vgzv"] Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.229243 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cc4ff9998-ks46b"] Nov 24 11:41:50 crc kubenswrapper[4678]: W1124 11:41:50.235034 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf258680a_b33d_4eec_8fce_3f6f5d3a00ee.slice/crio-050fcc6e06fb8535a1650df287563a18c2520c67299ff8a94405f58e6ed7019a WatchSource:0}: Error finding container 050fcc6e06fb8535a1650df287563a18c2520c67299ff8a94405f58e6ed7019a: Status 404 returned error can't find the container with id 050fcc6e06fb8535a1650df287563a18c2520c67299ff8a94405f58e6ed7019a Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.470976 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cc4ff9998-ks46b" event={"ID":"f258680a-b33d-4eec-8fce-3f6f5d3a00ee","Type":"ContainerStarted","Data":"050fcc6e06fb8535a1650df287563a18c2520c67299ff8a94405f58e6ed7019a"} Nov 24 11:41:50 crc kubenswrapper[4678]: 
I1124 11:41:50.473133 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ee2246c-b989-4aa6-9592-c84f9e8252e1","Type":"ContainerStarted","Data":"dcb72ce8f23cd3eb954a22869062683aa4ffb0635d996e4a2788b6c6dea4c969"} Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.473414 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.474600 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b6779dd-5vgzv" event={"ID":"97d6d2c5-9baf-480a-b82b-d283121c72d3","Type":"ContainerStarted","Data":"ca9bef20c1300b02df0beb9db984fcad011e28376c5b230ab42be26aa6744740"} Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.476985 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-666c8594cc-27c89" event={"ID":"6b75b7f8-46a4-423a-bd0f-910b078e32ed","Type":"ContainerStarted","Data":"85b3e46e76e13aa72ad88d08b16410f58826dd9a4f577e85f4bc392bd3f5db84"} Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.477013 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-666c8594cc-27c89" event={"ID":"6b75b7f8-46a4-423a-bd0f-910b078e32ed","Type":"ContainerStarted","Data":"b72063c833991e1534664f569723607b826f7b39bc1c04ea81924cf057990b68"} Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.477952 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.513068 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.394121703 podStartE2EDuration="9.513046622s" podCreationTimestamp="2025-11-24 11:41:41 +0000 UTC" firstStartedPulling="2025-11-24 11:41:42.401163047 +0000 UTC m=+1513.332222686" lastFinishedPulling="2025-11-24 11:41:49.520087956 +0000 UTC m=+1520.451147605" 
observedRunningTime="2025-11-24 11:41:50.495806258 +0000 UTC m=+1521.426865917" watchObservedRunningTime="2025-11-24 11:41:50.513046622 +0000 UTC m=+1521.444106261" Nov 24 11:41:50 crc kubenswrapper[4678]: I1124 11:41:50.529275 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-666c8594cc-27c89" podStartSLOduration=2.529256866 podStartE2EDuration="2.529256866s" podCreationTimestamp="2025-11-24 11:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:41:50.51447572 +0000 UTC m=+1521.445535359" watchObservedRunningTime="2025-11-24 11:41:50.529256866 +0000 UTC m=+1521.460316505" Nov 24 11:41:52 crc kubenswrapper[4678]: I1124 11:41:52.898447 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:41:52 crc kubenswrapper[4678]: E1124 11:41:52.899592 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:41:53 crc kubenswrapper[4678]: I1124 11:41:53.528609 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cc4ff9998-ks46b" event={"ID":"f258680a-b33d-4eec-8fce-3f6f5d3a00ee","Type":"ContainerStarted","Data":"8b619368469a63cecd790ce2d2aeb0d42da3e06a6fee6899c9891de41727a7ca"} Nov 24 11:41:53 crc kubenswrapper[4678]: I1124 11:41:53.528889 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:41:53 crc kubenswrapper[4678]: I1124 11:41:53.530662 4678 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-cfnapi-84b6779dd-5vgzv" event={"ID":"97d6d2c5-9baf-480a-b82b-d283121c72d3","Type":"ContainerStarted","Data":"74a8cd3fb2fdfa645c4f01d614fdb64bd55620ac2032baa8876674ff6bd8c75c"} Nov 24 11:41:53 crc kubenswrapper[4678]: I1124 11:41:53.530849 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:41:53 crc kubenswrapper[4678]: I1124 11:41:53.554271 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5cc4ff9998-ks46b" podStartSLOduration=3.459297824 podStartE2EDuration="5.554252513s" podCreationTimestamp="2025-11-24 11:41:48 +0000 UTC" firstStartedPulling="2025-11-24 11:41:50.237920103 +0000 UTC m=+1521.168979742" lastFinishedPulling="2025-11-24 11:41:52.332874792 +0000 UTC m=+1523.263934431" observedRunningTime="2025-11-24 11:41:53.545515348 +0000 UTC m=+1524.476574987" watchObservedRunningTime="2025-11-24 11:41:53.554252513 +0000 UTC m=+1524.485312152" Nov 24 11:41:53 crc kubenswrapper[4678]: I1124 11:41:53.577532 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84b6779dd-5vgzv" podStartSLOduration=3.26632803 podStartE2EDuration="5.577510307s" podCreationTimestamp="2025-11-24 11:41:48 +0000 UTC" firstStartedPulling="2025-11-24 11:41:50.021273004 +0000 UTC m=+1520.952332643" lastFinishedPulling="2025-11-24 11:41:52.332455281 +0000 UTC m=+1523.263514920" observedRunningTime="2025-11-24 11:41:53.573277393 +0000 UTC m=+1524.504337032" watchObservedRunningTime="2025-11-24 11:41:53.577510307 +0000 UTC m=+1524.508569956" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.060808 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t"] Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.063197 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.070412 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.070627 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.070877 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.071088 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.101725 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t"] Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.221657 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.221819 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.221857 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7tcc\" (UniqueName: \"kubernetes.io/projected/477ad805-b800-4cb5-b0ae-9fb064cc09ee-kube-api-access-k7tcc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.221890 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.325482 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.325652 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.326610 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7tcc\" (UniqueName: \"kubernetes.io/projected/477ad805-b800-4cb5-b0ae-9fb064cc09ee-kube-api-access-k7tcc\") 
pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.326660 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.336332 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.345507 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.346702 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.368506 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k7tcc\" (UniqueName: \"kubernetes.io/projected/477ad805-b800-4cb5-b0ae-9fb064cc09ee-kube-api-access-k7tcc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.432514 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.930265 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-84b6779dd-5vgzv" Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.998486 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-58b5bdcfc5-zwlfb"] Nov 24 11:42:01 crc kubenswrapper[4678]: I1124 11:42:01.998745 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" podUID="ab7fd19c-25f8-400e-b98a-e5dd65e113ac" containerName="heat-cfnapi" containerID="cri-o://4a8413214958702d07e05b1d3adbf724e5f7fa558cb7975892d3c43f6cacce03" gracePeriod=60 Nov 24 11:42:02 crc kubenswrapper[4678]: I1124 11:42:02.021066 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5cc4ff9998-ks46b" Nov 24 11:42:02 crc kubenswrapper[4678]: I1124 11:42:02.103427 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d5bd86fdc-h8dll"] Nov 24 11:42:02 crc kubenswrapper[4678]: I1124 11:42:02.103767 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6d5bd86fdc-h8dll" podUID="a72aa4c3-72f4-473d-bf8f-a16b6d456add" containerName="heat-api" containerID="cri-o://70127dabb05c80deebbf61255958491260ab9ab73ce1030ea5f8b33914502887" gracePeriod=60 Nov 24 11:42:02 crc kubenswrapper[4678]: I1124 11:42:02.420067 
4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t"] Nov 24 11:42:02 crc kubenswrapper[4678]: I1124 11:42:02.657292 4678 generic.go:334] "Generic (PLEG): container finished" podID="87e447ce-94b3-4e59-a513-fec289651bd6" containerID="12f98ab6b8aa344c386213c48f9adf332a13db5a8010f8f98e911e4b6afa7031" exitCode=0 Nov 24 11:42:02 crc kubenswrapper[4678]: I1124 11:42:02.657717 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"87e447ce-94b3-4e59-a513-fec289651bd6","Type":"ContainerDied","Data":"12f98ab6b8aa344c386213c48f9adf332a13db5a8010f8f98e911e4b6afa7031"} Nov 24 11:42:02 crc kubenswrapper[4678]: I1124 11:42:02.661803 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" event={"ID":"477ad805-b800-4cb5-b0ae-9fb064cc09ee","Type":"ContainerStarted","Data":"355b8ea604ed9b45cd4326cb26280ebbfdcef8be9cec728385bea66dadc4c03a"} Nov 24 11:42:03 crc kubenswrapper[4678]: I1124 11:42:03.680863 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"87e447ce-94b3-4e59-a513-fec289651bd6","Type":"ContainerStarted","Data":"7ef7d257062910cc5e561589daa561691075f2f51cdc3f3908ebdc5b13653cbe"} Nov 24 11:42:03 crc kubenswrapper[4678]: I1124 11:42:03.683168 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 11:42:03 crc kubenswrapper[4678]: I1124 11:42:03.712640 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.712549302 podStartE2EDuration="43.712549302s" podCreationTimestamp="2025-11-24 11:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:42:03.704629399 +0000 UTC m=+1534.635689068" 
watchObservedRunningTime="2025-11-24 11:42:03.712549302 +0000 UTC m=+1534.643608941" Nov 24 11:42:04 crc kubenswrapper[4678]: I1124 11:42:04.703661 4678 generic.go:334] "Generic (PLEG): container finished" podID="2b3ff76d-79e0-4f90-8b4a-7763c3ca8167" containerID="c359de690a8026a365d4ab0ce340f63f3319e716081723ce9032acf33912ca0f" exitCode=0 Nov 24 11:42:04 crc kubenswrapper[4678]: I1124 11:42:04.705573 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167","Type":"ContainerDied","Data":"c359de690a8026a365d4ab0ce340f63f3319e716081723ce9032acf33912ca0f"} Nov 24 11:42:04 crc kubenswrapper[4678]: I1124 11:42:04.896043 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:42:04 crc kubenswrapper[4678]: E1124 11:42:04.896362 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:42:05 crc kubenswrapper[4678]: I1124 11:42:05.494944 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6d5bd86fdc-h8dll" podUID="a72aa4c3-72f4-473d-bf8f-a16b6d456add" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.224:8004/healthcheck\": read tcp 10.217.0.2:39392->10.217.0.224:8004: read: connection reset by peer" Nov 24 11:42:05 crc kubenswrapper[4678]: I1124 11:42:05.563158 4678 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" podUID="ab7fd19c-25f8-400e-b98a-e5dd65e113ac" containerName="heat-cfnapi" probeResult="failure" output="Get 
\"https://10.217.0.225:8000/healthcheck\": read tcp 10.217.0.2:53844->10.217.0.225:8000: read: connection reset by peer" Nov 24 11:42:05 crc kubenswrapper[4678]: I1124 11:42:05.718810 4678 generic.go:334] "Generic (PLEG): container finished" podID="a72aa4c3-72f4-473d-bf8f-a16b6d456add" containerID="70127dabb05c80deebbf61255958491260ab9ab73ce1030ea5f8b33914502887" exitCode=0 Nov 24 11:42:05 crc kubenswrapper[4678]: I1124 11:42:05.718873 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d5bd86fdc-h8dll" event={"ID":"a72aa4c3-72f4-473d-bf8f-a16b6d456add","Type":"ContainerDied","Data":"70127dabb05c80deebbf61255958491260ab9ab73ce1030ea5f8b33914502887"} Nov 24 11:42:05 crc kubenswrapper[4678]: I1124 11:42:05.720643 4678 generic.go:334] "Generic (PLEG): container finished" podID="ab7fd19c-25f8-400e-b98a-e5dd65e113ac" containerID="4a8413214958702d07e05b1d3adbf724e5f7fa558cb7975892d3c43f6cacce03" exitCode=0 Nov 24 11:42:05 crc kubenswrapper[4678]: I1124 11:42:05.720690 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" event={"ID":"ab7fd19c-25f8-400e-b98a-e5dd65e113ac","Type":"ContainerDied","Data":"4a8413214958702d07e05b1d3adbf724e5f7fa558cb7975892d3c43f6cacce03"} Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.111626 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.120241 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192186 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-public-tls-certs\") pod \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192242 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-internal-tls-certs\") pod \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192273 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data\") pod \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192312 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data-custom\") pod \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192336 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-combined-ca-bundle\") pod \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192359 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data-custom\") pod \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192434 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-internal-tls-certs\") pod \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192450 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-public-tls-certs\") pod \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192468 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phgw6\" (UniqueName: \"kubernetes.io/projected/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-kube-api-access-phgw6\") pod \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\" (UID: \"ab7fd19c-25f8-400e-b98a-e5dd65e113ac\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192491 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbdtt\" (UniqueName: \"kubernetes.io/projected/a72aa4c3-72f4-473d-bf8f-a16b6d456add-kube-api-access-sbdtt\") pod \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192521 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-combined-ca-bundle\") pod \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\" (UID: 
\"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.192547 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data\") pod \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\" (UID: \"a72aa4c3-72f4-473d-bf8f-a16b6d456add\") " Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.198400 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72aa4c3-72f4-473d-bf8f-a16b6d456add-kube-api-access-sbdtt" (OuterVolumeSpecName: "kube-api-access-sbdtt") pod "a72aa4c3-72f4-473d-bf8f-a16b6d456add" (UID: "a72aa4c3-72f4-473d-bf8f-a16b6d456add"). InnerVolumeSpecName "kube-api-access-sbdtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.199515 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-kube-api-access-phgw6" (OuterVolumeSpecName: "kube-api-access-phgw6") pod "ab7fd19c-25f8-400e-b98a-e5dd65e113ac" (UID: "ab7fd19c-25f8-400e-b98a-e5dd65e113ac"). InnerVolumeSpecName "kube-api-access-phgw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.200747 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a72aa4c3-72f4-473d-bf8f-a16b6d456add" (UID: "a72aa4c3-72f4-473d-bf8f-a16b6d456add"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.203481 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ab7fd19c-25f8-400e-b98a-e5dd65e113ac" (UID: "ab7fd19c-25f8-400e-b98a-e5dd65e113ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.243627 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab7fd19c-25f8-400e-b98a-e5dd65e113ac" (UID: "ab7fd19c-25f8-400e-b98a-e5dd65e113ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.247016 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a72aa4c3-72f4-473d-bf8f-a16b6d456add" (UID: "a72aa4c3-72f4-473d-bf8f-a16b6d456add"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.279659 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a72aa4c3-72f4-473d-bf8f-a16b6d456add" (UID: "a72aa4c3-72f4-473d-bf8f-a16b6d456add"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.291968 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ab7fd19c-25f8-400e-b98a-e5dd65e113ac" (UID: "ab7fd19c-25f8-400e-b98a-e5dd65e113ac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.299561 4678 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.299632 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.299643 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.299653 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.299675 4678 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.299687 4678 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-phgw6\" (UniqueName: \"kubernetes.io/projected/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-kube-api-access-phgw6\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.299697 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbdtt\" (UniqueName: \"kubernetes.io/projected/a72aa4c3-72f4-473d-bf8f-a16b6d456add-kube-api-access-sbdtt\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.299712 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.303588 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data" (OuterVolumeSpecName: "config-data") pod "ab7fd19c-25f8-400e-b98a-e5dd65e113ac" (UID: "ab7fd19c-25f8-400e-b98a-e5dd65e113ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.313308 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ab7fd19c-25f8-400e-b98a-e5dd65e113ac" (UID: "ab7fd19c-25f8-400e-b98a-e5dd65e113ac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.314700 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data" (OuterVolumeSpecName: "config-data") pod "a72aa4c3-72f4-473d-bf8f-a16b6d456add" (UID: "a72aa4c3-72f4-473d-bf8f-a16b6d456add"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.322267 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a72aa4c3-72f4-473d-bf8f-a16b6d456add" (UID: "a72aa4c3-72f4-473d-bf8f-a16b6d456add"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.404466 4678 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.404507 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.404517 4678 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7fd19c-25f8-400e-b98a-e5dd65e113ac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.404527 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72aa4c3-72f4-473d-bf8f-a16b6d456add-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.749601 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" event={"ID":"ab7fd19c-25f8-400e-b98a-e5dd65e113ac","Type":"ContainerDied","Data":"172e6c8221bfb41715afe619c134cb2fe9f7032ddc78cd8c802a02e8c21d87cb"} Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.750241 4678 scope.go:117] "RemoveContainer" 
containerID="4a8413214958702d07e05b1d3adbf724e5f7fa558cb7975892d3c43f6cacce03" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.749629 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58b5bdcfc5-zwlfb" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.753490 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b3ff76d-79e0-4f90-8b4a-7763c3ca8167","Type":"ContainerStarted","Data":"a52479ffb96992ae616ddf21d8db7ccde593881475e80f5ca651204db03035ff"} Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.753798 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.755898 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d5bd86fdc-h8dll" event={"ID":"a72aa4c3-72f4-473d-bf8f-a16b6d456add","Type":"ContainerDied","Data":"7c60c88ee4f2d2ceb8f21b875ef2a3fdb9a322888207af12e28508da0484d79d"} Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.755956 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d5bd86fdc-h8dll" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.783936 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.783907588 podStartE2EDuration="39.783907588s" podCreationTimestamp="2025-11-24 11:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 11:42:07.772399019 +0000 UTC m=+1538.703458658" watchObservedRunningTime="2025-11-24 11:42:07.783907588 +0000 UTC m=+1538.714967227" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.798689 4678 scope.go:117] "RemoveContainer" containerID="70127dabb05c80deebbf61255958491260ab9ab73ce1030ea5f8b33914502887" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.808014 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-58b5bdcfc5-zwlfb"] Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.818487 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-58b5bdcfc5-zwlfb"] Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.829478 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d5bd86fdc-h8dll"] Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.840369 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6d5bd86fdc-h8dll"] Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.962346 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72aa4c3-72f4-473d-bf8f-a16b6d456add" path="/var/lib/kubelet/pods/a72aa4c3-72f4-473d-bf8f-a16b6d456add/volumes" Nov 24 11:42:07 crc kubenswrapper[4678]: I1124 11:42:07.963190 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7fd19c-25f8-400e-b98a-e5dd65e113ac" path="/var/lib/kubelet/pods/ab7fd19c-25f8-400e-b98a-e5dd65e113ac/volumes" Nov 24 11:42:09 crc 
kubenswrapper[4678]: I1124 11:42:09.259036 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-666c8594cc-27c89" Nov 24 11:42:09 crc kubenswrapper[4678]: I1124 11:42:09.310042 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-868b8dc7c4-6g2qc"] Nov 24 11:42:09 crc kubenswrapper[4678]: I1124 11:42:09.310262 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-868b8dc7c4-6g2qc" podUID="dbe18e72-2389-4b2f-8819-29d70cdc5965" containerName="heat-engine" containerID="cri-o://afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" gracePeriod=60 Nov 24 11:42:11 crc kubenswrapper[4678]: I1124 11:42:11.919823 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.320867 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-rzwfw"] Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.335172 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-rzwfw"] Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.413838 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-26ncd"] Nov 24 11:42:12 crc kubenswrapper[4678]: E1124 11:42:12.414505 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72aa4c3-72f4-473d-bf8f-a16b6d456add" containerName="heat-api" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.414522 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72aa4c3-72f4-473d-bf8f-a16b6d456add" containerName="heat-api" Nov 24 11:42:12 crc kubenswrapper[4678]: E1124 11:42:12.414536 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7fd19c-25f8-400e-b98a-e5dd65e113ac" containerName="heat-cfnapi" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.414544 4678 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ab7fd19c-25f8-400e-b98a-e5dd65e113ac" containerName="heat-cfnapi" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.414813 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72aa4c3-72f4-473d-bf8f-a16b6d456add" containerName="heat-api" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.414846 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7fd19c-25f8-400e-b98a-e5dd65e113ac" containerName="heat-cfnapi" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.418953 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.425202 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.456940 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-26ncd"] Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.535060 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-scripts\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.535425 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-combined-ca-bundle\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.535693 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-config-data\") pod 
\"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.535736 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsvzv\" (UniqueName: \"kubernetes.io/projected/88740e07-191d-494a-bba6-3b0c5f3a9b12-kube-api-access-tsvzv\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.638008 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-config-data\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.638061 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsvzv\" (UniqueName: \"kubernetes.io/projected/88740e07-191d-494a-bba6-3b0c5f3a9b12-kube-api-access-tsvzv\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.638119 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-scripts\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.638172 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-combined-ca-bundle\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc 
kubenswrapper[4678]: I1124 11:42:12.646352 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-combined-ca-bundle\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.649166 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-scripts\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.651576 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-config-data\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.656324 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsvzv\" (UniqueName: \"kubernetes.io/projected/88740e07-191d-494a-bba6-3b0c5f3a9b12-kube-api-access-tsvzv\") pod \"aodh-db-sync-26ncd\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:12 crc kubenswrapper[4678]: I1124 11:42:12.748968 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:13 crc kubenswrapper[4678]: I1124 11:42:13.912930 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5410b784-9693-43c3-9f8a-43084f540dc6" path="/var/lib/kubelet/pods/5410b784-9693-43c3-9f8a-43084f540dc6/volumes" Nov 24 11:42:14 crc kubenswrapper[4678]: E1124 11:42:14.446951 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:42:14 crc kubenswrapper[4678]: E1124 11:42:14.448784 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:42:14 crc kubenswrapper[4678]: E1124 11:42:14.450071 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:42:14 crc kubenswrapper[4678]: E1124 11:42:14.450100 4678 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-868b8dc7c4-6g2qc" podUID="dbe18e72-2389-4b2f-8819-29d70cdc5965" containerName="heat-engine" Nov 24 11:42:17 crc kubenswrapper[4678]: I1124 11:42:17.897416 4678 scope.go:117] "RemoveContainer" 
containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:42:17 crc kubenswrapper[4678]: E1124 11:42:17.898130 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:42:18 crc kubenswrapper[4678]: E1124 11:42:18.877963 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Nov 24 11:42:18 crc kubenswrapper[4678]: E1124 11:42:18.878133 4678 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 24 11:42:18 crc kubenswrapper[4678]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Nov 24 11:42:18 crc kubenswrapper[4678]: - hosts: all Nov 24 11:42:18 crc kubenswrapper[4678]: strategy: linear Nov 24 11:42:18 crc kubenswrapper[4678]: tasks: Nov 24 11:42:18 crc kubenswrapper[4678]: - name: Enable podified-repos Nov 24 11:42:18 crc kubenswrapper[4678]: become: true Nov 24 11:42:18 crc kubenswrapper[4678]: ansible.builtin.shell: | Nov 24 11:42:18 crc kubenswrapper[4678]: set -euxo pipefail Nov 24 11:42:18 crc kubenswrapper[4678]: pushd /var/tmp Nov 24 11:42:18 crc kubenswrapper[4678]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | 
tar -xz Nov 24 11:42:18 crc kubenswrapper[4678]: pushd repo-setup-main Nov 24 11:42:18 crc kubenswrapper[4678]: python3 -m venv ./venv Nov 24 11:42:18 crc kubenswrapper[4678]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Nov 24 11:42:18 crc kubenswrapper[4678]: ./venv/bin/repo-setup current-podified -b antelope Nov 24 11:42:18 crc kubenswrapper[4678]: popd Nov 24 11:42:18 crc kubenswrapper[4678]: rm -rf repo-setup-main Nov 24 11:42:18 crc kubenswrapper[4678]: Nov 24 11:42:18 crc kubenswrapper[4678]: Nov 24 11:42:18 crc kubenswrapper[4678]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Nov 24 11:42:18 crc kubenswrapper[4678]: edpm_override_hosts: openstack-edpm-ipam Nov 24 11:42:18 crc kubenswrapper[4678]: edpm_service_type: repo-setup Nov 24 11:42:18 crc kubenswrapper[4678]: Nov 24 11:42:18 crc kubenswrapper[4678]: Nov 24 11:42:18 crc kubenswrapper[4678]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7tcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation
:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t_openstack(477ad805-b800-4cb5-b0ae-9fb064cc09ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 24 11:42:18 crc kubenswrapper[4678]: > logger="UnhandledError" Nov 24 11:42:18 crc kubenswrapper[4678]: E1124 11:42:18.879475 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" podUID="477ad805-b800-4cb5-b0ae-9fb064cc09ee" Nov 24 11:42:18 crc kubenswrapper[4678]: I1124 11:42:18.892829 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 11:42:18 crc kubenswrapper[4678]: E1124 11:42:18.930898 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" podUID="477ad805-b800-4cb5-b0ae-9fb064cc09ee" Nov 24 11:42:19 crc kubenswrapper[4678]: I1124 11:42:19.533267 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-26ncd"] Nov 24 11:42:19 crc kubenswrapper[4678]: I1124 11:42:19.943043 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-26ncd" event={"ID":"88740e07-191d-494a-bba6-3b0c5f3a9b12","Type":"ContainerStarted","Data":"c736ec243d5741283421439ad84e09ebe7fb3cbb558b89007cec12025b6f8583"} Nov 24 11:42:21 crc kubenswrapper[4678]: I1124 11:42:21.120856 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 11:42:24 crc kubenswrapper[4678]: E1124 11:42:24.447282 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:42:24 crc kubenswrapper[4678]: E1124 11:42:24.448568 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:42:24 crc kubenswrapper[4678]: E1124 11:42:24.450199 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 11:42:24 crc kubenswrapper[4678]: E1124 11:42:24.450234 4678 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-868b8dc7c4-6g2qc" podUID="dbe18e72-2389-4b2f-8819-29d70cdc5965" containerName="heat-engine" Nov 24 11:42:25 crc kubenswrapper[4678]: 
I1124 11:42:25.023632 4678 generic.go:334] "Generic (PLEG): container finished" podID="dbe18e72-2389-4b2f-8819-29d70cdc5965" containerID="afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" exitCode=0 Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.023761 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-868b8dc7c4-6g2qc" event={"ID":"dbe18e72-2389-4b2f-8819-29d70cdc5965","Type":"ContainerDied","Data":"afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9"} Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.026110 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-26ncd" event={"ID":"88740e07-191d-494a-bba6-3b0c5f3a9b12","Type":"ContainerStarted","Data":"91872e1803e0c560b726223508d87107413852ca81dd2e986a30f9909f7ac2d0"} Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.054005 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-26ncd" podStartSLOduration=8.122538349 podStartE2EDuration="13.053981652s" podCreationTimestamp="2025-11-24 11:42:12 +0000 UTC" firstStartedPulling="2025-11-24 11:42:19.535265739 +0000 UTC m=+1550.466325378" lastFinishedPulling="2025-11-24 11:42:24.466709042 +0000 UTC m=+1555.397768681" observedRunningTime="2025-11-24 11:42:25.039039702 +0000 UTC m=+1555.970099361" watchObservedRunningTime="2025-11-24 11:42:25.053981652 +0000 UTC m=+1555.985041311" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.213643 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.370078 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data-custom\") pod \"dbe18e72-2389-4b2f-8819-29d70cdc5965\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.370173 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4thc\" (UniqueName: \"kubernetes.io/projected/dbe18e72-2389-4b2f-8819-29d70cdc5965-kube-api-access-r4thc\") pod \"dbe18e72-2389-4b2f-8819-29d70cdc5965\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.370317 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data\") pod \"dbe18e72-2389-4b2f-8819-29d70cdc5965\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.370487 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-combined-ca-bundle\") pod \"dbe18e72-2389-4b2f-8819-29d70cdc5965\" (UID: \"dbe18e72-2389-4b2f-8819-29d70cdc5965\") " Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.377846 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dbe18e72-2389-4b2f-8819-29d70cdc5965" (UID: "dbe18e72-2389-4b2f-8819-29d70cdc5965"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.380000 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe18e72-2389-4b2f-8819-29d70cdc5965-kube-api-access-r4thc" (OuterVolumeSpecName: "kube-api-access-r4thc") pod "dbe18e72-2389-4b2f-8819-29d70cdc5965" (UID: "dbe18e72-2389-4b2f-8819-29d70cdc5965"). InnerVolumeSpecName "kube-api-access-r4thc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.419022 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbe18e72-2389-4b2f-8819-29d70cdc5965" (UID: "dbe18e72-2389-4b2f-8819-29d70cdc5965"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.432585 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data" (OuterVolumeSpecName: "config-data") pod "dbe18e72-2389-4b2f-8819-29d70cdc5965" (UID: "dbe18e72-2389-4b2f-8819-29d70cdc5965"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.472898 4678 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.472937 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4thc\" (UniqueName: \"kubernetes.io/projected/dbe18e72-2389-4b2f-8819-29d70cdc5965-kube-api-access-r4thc\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.472950 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:25 crc kubenswrapper[4678]: I1124 11:42:25.472961 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe18e72-2389-4b2f-8819-29d70cdc5965-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:26 crc kubenswrapper[4678]: I1124 11:42:26.038029 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-868b8dc7c4-6g2qc" event={"ID":"dbe18e72-2389-4b2f-8819-29d70cdc5965","Type":"ContainerDied","Data":"06b9d287e65dd4febb45c4cec9dbcb325e51c971b614dc2ad5480dbef8512674"} Nov 24 11:42:26 crc kubenswrapper[4678]: I1124 11:42:26.038068 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-868b8dc7c4-6g2qc" Nov 24 11:42:26 crc kubenswrapper[4678]: I1124 11:42:26.038525 4678 scope.go:117] "RemoveContainer" containerID="afa143c9fa46f973a488475e55fb20fe23a9c38f1ccfd6d3137a6879cd7ea6e9" Nov 24 11:42:26 crc kubenswrapper[4678]: I1124 11:42:26.079403 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-868b8dc7c4-6g2qc"] Nov 24 11:42:26 crc kubenswrapper[4678]: I1124 11:42:26.091126 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-868b8dc7c4-6g2qc"] Nov 24 11:42:27 crc kubenswrapper[4678]: I1124 11:42:27.910753 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe18e72-2389-4b2f-8819-29d70cdc5965" path="/var/lib/kubelet/pods/dbe18e72-2389-4b2f-8819-29d70cdc5965/volumes" Nov 24 11:42:28 crc kubenswrapper[4678]: I1124 11:42:28.064175 4678 generic.go:334] "Generic (PLEG): container finished" podID="88740e07-191d-494a-bba6-3b0c5f3a9b12" containerID="91872e1803e0c560b726223508d87107413852ca81dd2e986a30f9909f7ac2d0" exitCode=0 Nov 24 11:42:28 crc kubenswrapper[4678]: I1124 11:42:28.064237 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-26ncd" event={"ID":"88740e07-191d-494a-bba6-3b0c5f3a9b12","Type":"ContainerDied","Data":"91872e1803e0c560b726223508d87107413852ca81dd2e986a30f9909f7ac2d0"} Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.601052 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.695457 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-combined-ca-bundle\") pod \"88740e07-191d-494a-bba6-3b0c5f3a9b12\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.695531 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsvzv\" (UniqueName: \"kubernetes.io/projected/88740e07-191d-494a-bba6-3b0c5f3a9b12-kube-api-access-tsvzv\") pod \"88740e07-191d-494a-bba6-3b0c5f3a9b12\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.695568 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-config-data\") pod \"88740e07-191d-494a-bba6-3b0c5f3a9b12\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.695653 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-scripts\") pod \"88740e07-191d-494a-bba6-3b0c5f3a9b12\" (UID: \"88740e07-191d-494a-bba6-3b0c5f3a9b12\") " Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.701254 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-scripts" (OuterVolumeSpecName: "scripts") pod "88740e07-191d-494a-bba6-3b0c5f3a9b12" (UID: "88740e07-191d-494a-bba6-3b0c5f3a9b12"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.704195 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88740e07-191d-494a-bba6-3b0c5f3a9b12-kube-api-access-tsvzv" (OuterVolumeSpecName: "kube-api-access-tsvzv") pod "88740e07-191d-494a-bba6-3b0c5f3a9b12" (UID: "88740e07-191d-494a-bba6-3b0c5f3a9b12"). InnerVolumeSpecName "kube-api-access-tsvzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.738151 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88740e07-191d-494a-bba6-3b0c5f3a9b12" (UID: "88740e07-191d-494a-bba6-3b0c5f3a9b12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.740363 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-config-data" (OuterVolumeSpecName: "config-data") pod "88740e07-191d-494a-bba6-3b0c5f3a9b12" (UID: "88740e07-191d-494a-bba6-3b0c5f3a9b12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.799223 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.799253 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsvzv\" (UniqueName: \"kubernetes.io/projected/88740e07-191d-494a-bba6-3b0c5f3a9b12-kube-api-access-tsvzv\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.799265 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:29 crc kubenswrapper[4678]: I1124 11:42:29.799273 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88740e07-191d-494a-bba6-3b0c5f3a9b12-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:30 crc kubenswrapper[4678]: I1124 11:42:30.093003 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-26ncd" event={"ID":"88740e07-191d-494a-bba6-3b0c5f3a9b12","Type":"ContainerDied","Data":"c736ec243d5741283421439ad84e09ebe7fb3cbb558b89007cec12025b6f8583"} Nov 24 11:42:30 crc kubenswrapper[4678]: I1124 11:42:30.093066 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-26ncd" Nov 24 11:42:30 crc kubenswrapper[4678]: I1124 11:42:30.093069 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c736ec243d5741283421439ad84e09ebe7fb3cbb558b89007cec12025b6f8583" Nov 24 11:42:31 crc kubenswrapper[4678]: I1124 11:42:31.897558 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:42:31 crc kubenswrapper[4678]: E1124 11:42:31.898874 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:42:32 crc kubenswrapper[4678]: I1124 11:42:32.534247 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 11:42:32 crc kubenswrapper[4678]: I1124 11:42:32.534530 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-api" containerID="cri-o://8794fb3bae779b82e15f66fe9acb61f8e75ef61f136a9c64bfd670d9407e521c" gracePeriod=30 Nov 24 11:42:32 crc kubenswrapper[4678]: I1124 11:42:32.534599 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-listener" containerID="cri-o://8c1f058f23d20600e023cc13524dfec570e01ffae5b8cdcf98c054b40705eace" gracePeriod=30 Nov 24 11:42:32 crc kubenswrapper[4678]: I1124 11:42:32.534638 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" 
containerName="aodh-notifier" containerID="cri-o://38d25c4465104f0c39efc2438beef8a67615df4719e4b90ee704f716cbb70f74" gracePeriod=30 Nov 24 11:42:32 crc kubenswrapper[4678]: I1124 11:42:32.534749 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-evaluator" containerID="cri-o://ecdacfb168c696319b3f83b8abf5157ddc7034f2fa809f7b60d0b58f8a39fbec" gracePeriod=30 Nov 24 11:42:33 crc kubenswrapper[4678]: I1124 11:42:33.133354 4678 generic.go:334] "Generic (PLEG): container finished" podID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerID="ecdacfb168c696319b3f83b8abf5157ddc7034f2fa809f7b60d0b58f8a39fbec" exitCode=0 Nov 24 11:42:33 crc kubenswrapper[4678]: I1124 11:42:33.133706 4678 generic.go:334] "Generic (PLEG): container finished" podID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerID="8794fb3bae779b82e15f66fe9acb61f8e75ef61f136a9c64bfd670d9407e521c" exitCode=0 Nov 24 11:42:33 crc kubenswrapper[4678]: I1124 11:42:33.133449 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerDied","Data":"ecdacfb168c696319b3f83b8abf5157ddc7034f2fa809f7b60d0b58f8a39fbec"} Nov 24 11:42:33 crc kubenswrapper[4678]: I1124 11:42:33.133754 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerDied","Data":"8794fb3bae779b82e15f66fe9acb61f8e75ef61f136a9c64bfd670d9407e521c"} Nov 24 11:42:33 crc kubenswrapper[4678]: I1124 11:42:33.699344 4678 scope.go:117] "RemoveContainer" containerID="7742601b4251af3d976d5e6333202f63a506b65a0212820e600bc68a6bf07e78" Nov 24 11:42:34 crc kubenswrapper[4678]: I1124 11:42:34.454294 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:42:35 crc kubenswrapper[4678]: I1124 11:42:35.164296 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" event={"ID":"477ad805-b800-4cb5-b0ae-9fb064cc09ee","Type":"ContainerStarted","Data":"3b9b72b5330c68fb4571f4e5bf05364e30baf7e3be4086bcd4fc9870b0e524e8"} Nov 24 11:42:35 crc kubenswrapper[4678]: I1124 11:42:35.192634 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" podStartSLOduration=2.207613091 podStartE2EDuration="34.192618653s" podCreationTimestamp="2025-11-24 11:42:01 +0000 UTC" firstStartedPulling="2025-11-24 11:42:02.466996092 +0000 UTC m=+1533.398055731" lastFinishedPulling="2025-11-24 11:42:34.452001644 +0000 UTC m=+1565.383061293" observedRunningTime="2025-11-24 11:42:35.188930115 +0000 UTC m=+1566.119989824" watchObservedRunningTime="2025-11-24 11:42:35.192618653 +0000 UTC m=+1566.123678292" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.190975 4678 generic.go:334] "Generic (PLEG): container finished" podID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerID="8c1f058f23d20600e023cc13524dfec570e01ffae5b8cdcf98c054b40705eace" exitCode=0 Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.191223 4678 generic.go:334] "Generic (PLEG): container finished" podID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerID="38d25c4465104f0c39efc2438beef8a67615df4719e4b90ee704f716cbb70f74" exitCode=0 Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.191082 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerDied","Data":"8c1f058f23d20600e023cc13524dfec570e01ffae5b8cdcf98c054b40705eace"} Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.191264 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerDied","Data":"38d25c4465104f0c39efc2438beef8a67615df4719e4b90ee704f716cbb70f74"} Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.191280 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"dede9b01-c855-46bb-b17c-3ebc79ca3ff5","Type":"ContainerDied","Data":"0b62396c00505e6e48d1618b4932ce7697ea463bf56f4d2fb88df8c0e9064c41"} Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.191289 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b62396c00505e6e48d1618b4932ce7697ea463bf56f4d2fb88df8c0e9064c41" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.269241 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.399608 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-public-tls-certs\") pod \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.399692 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-combined-ca-bundle\") pod \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.399794 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-internal-tls-certs\") pod \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.400100 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qn6x7\" (UniqueName: \"kubernetes.io/projected/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-kube-api-access-qn6x7\") pod \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.400183 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-config-data\") pod \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.400304 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-scripts\") pod \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\" (UID: \"dede9b01-c855-46bb-b17c-3ebc79ca3ff5\") " Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.410912 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-scripts" (OuterVolumeSpecName: "scripts") pod "dede9b01-c855-46bb-b17c-3ebc79ca3ff5" (UID: "dede9b01-c855-46bb-b17c-3ebc79ca3ff5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.425955 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-kube-api-access-qn6x7" (OuterVolumeSpecName: "kube-api-access-qn6x7") pod "dede9b01-c855-46bb-b17c-3ebc79ca3ff5" (UID: "dede9b01-c855-46bb-b17c-3ebc79ca3ff5"). InnerVolumeSpecName "kube-api-access-qn6x7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.482869 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dede9b01-c855-46bb-b17c-3ebc79ca3ff5" (UID: "dede9b01-c855-46bb-b17c-3ebc79ca3ff5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.485836 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dede9b01-c855-46bb-b17c-3ebc79ca3ff5" (UID: "dede9b01-c855-46bb-b17c-3ebc79ca3ff5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.505394 4678 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.505425 4678 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.505436 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn6x7\" (UniqueName: \"kubernetes.io/projected/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-kube-api-access-qn6x7\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.505447 4678 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-scripts\") on node 
\"crc\" DevicePath \"\"" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.538892 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dede9b01-c855-46bb-b17c-3ebc79ca3ff5" (UID: "dede9b01-c855-46bb-b17c-3ebc79ca3ff5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.555455 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-config-data" (OuterVolumeSpecName: "config-data") pod "dede9b01-c855-46bb-b17c-3ebc79ca3ff5" (UID: "dede9b01-c855-46bb-b17c-3ebc79ca3ff5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.608096 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:37 crc kubenswrapper[4678]: I1124 11:42:37.608133 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dede9b01-c855-46bb-b17c-3ebc79ca3ff5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.217141 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.254831 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.272131 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.285611 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 11:42:38 crc kubenswrapper[4678]: E1124 11:42:38.286645 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-notifier" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.286818 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-notifier" Nov 24 11:42:38 crc kubenswrapper[4678]: E1124 11:42:38.286908 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-evaluator" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.287001 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-evaluator" Nov 24 11:42:38 crc kubenswrapper[4678]: E1124 11:42:38.287092 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-listener" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.287169 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-listener" Nov 24 11:42:38 crc kubenswrapper[4678]: E1124 11:42:38.287250 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-api" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.287317 4678 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-api" Nov 24 11:42:38 crc kubenswrapper[4678]: E1124 11:42:38.287408 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe18e72-2389-4b2f-8819-29d70cdc5965" containerName="heat-engine" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.287559 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe18e72-2389-4b2f-8819-29d70cdc5965" containerName="heat-engine" Nov 24 11:42:38 crc kubenswrapper[4678]: E1124 11:42:38.287690 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88740e07-191d-494a-bba6-3b0c5f3a9b12" containerName="aodh-db-sync" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.287768 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="88740e07-191d-494a-bba6-3b0c5f3a9b12" containerName="aodh-db-sync" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.288173 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-notifier" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.288464 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-listener" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.288598 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-evaluator" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.288726 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="88740e07-191d-494a-bba6-3b0c5f3a9b12" containerName="aodh-db-sync" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.288812 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" containerName="aodh-api" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.288901 4678 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dbe18e72-2389-4b2f-8819-29d70cdc5965" containerName="heat-engine" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.292095 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.297164 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.298888 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.299378 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.302776 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.302777 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bwbmq" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.303008 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.435930 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5lht\" (UniqueName: \"kubernetes.io/projected/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-kube-api-access-j5lht\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.435973 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-config-data\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 
11:42:38.436138 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.436206 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-scripts\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.436607 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-public-tls-certs\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.436733 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.538968 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-public-tls-certs\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.539025 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.539125 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5lht\" (UniqueName: \"kubernetes.io/projected/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-kube-api-access-j5lht\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.539148 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-config-data\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.539174 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.539194 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-scripts\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.543392 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.543442 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-public-tls-certs\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.543912 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-config-data\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.544295 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-scripts\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.544321 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.560579 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lht\" (UniqueName: \"kubernetes.io/projected/5c096be8-cc8c-4b25-9a96-b64c3566f1a0-kube-api-access-j5lht\") pod \"aodh-0\" (UID: \"5c096be8-cc8c-4b25-9a96-b64c3566f1a0\") " pod="openstack/aodh-0" Nov 24 11:42:38 crc kubenswrapper[4678]: I1124 11:42:38.614039 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 11:42:39 crc kubenswrapper[4678]: W1124 11:42:39.085508 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c096be8_cc8c_4b25_9a96_b64c3566f1a0.slice/crio-456f3411ba87f113bb8488707cc8beca4ed4d63a44fb3c611f6eb40fd90762e5 WatchSource:0}: Error finding container 456f3411ba87f113bb8488707cc8beca4ed4d63a44fb3c611f6eb40fd90762e5: Status 404 returned error can't find the container with id 456f3411ba87f113bb8488707cc8beca4ed4d63a44fb3c611f6eb40fd90762e5 Nov 24 11:42:39 crc kubenswrapper[4678]: I1124 11:42:39.086980 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 11:42:39 crc kubenswrapper[4678]: I1124 11:42:39.236900 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5c096be8-cc8c-4b25-9a96-b64c3566f1a0","Type":"ContainerStarted","Data":"456f3411ba87f113bb8488707cc8beca4ed4d63a44fb3c611f6eb40fd90762e5"} Nov 24 11:42:39 crc kubenswrapper[4678]: I1124 11:42:39.914350 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dede9b01-c855-46bb-b17c-3ebc79ca3ff5" path="/var/lib/kubelet/pods/dede9b01-c855-46bb-b17c-3ebc79ca3ff5/volumes" Nov 24 11:42:40 crc kubenswrapper[4678]: I1124 11:42:40.257046 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5c096be8-cc8c-4b25-9a96-b64c3566f1a0","Type":"ContainerStarted","Data":"9794c8d3f0f7d90719191c79348349f7c122cd5f655c45a4ef46dd694bec6fcc"} Nov 24 11:42:41 crc kubenswrapper[4678]: I1124 11:42:41.275572 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5c096be8-cc8c-4b25-9a96-b64c3566f1a0","Type":"ContainerStarted","Data":"8e260512e123c5c507972314b1366e66ec4f56e77afb358f5739bbf793b85eec"} Nov 24 11:42:42 crc kubenswrapper[4678]: I1124 11:42:42.302817 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"5c096be8-cc8c-4b25-9a96-b64c3566f1a0","Type":"ContainerStarted","Data":"7ac9dd307ddc9b05ab0931b231388d4a579489b41c2aaf48129693f6257ef289"} Nov 24 11:42:43 crc kubenswrapper[4678]: I1124 11:42:43.319187 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5c096be8-cc8c-4b25-9a96-b64c3566f1a0","Type":"ContainerStarted","Data":"ea2c0b0622145ba2d6c01f3064d3fa93e99897bb6f8050413963b05fc01bca64"} Nov 24 11:42:43 crc kubenswrapper[4678]: I1124 11:42:43.348348 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.627294877 podStartE2EDuration="5.348327184s" podCreationTimestamp="2025-11-24 11:42:38 +0000 UTC" firstStartedPulling="2025-11-24 11:42:39.089349891 +0000 UTC m=+1570.020409530" lastFinishedPulling="2025-11-24 11:42:42.810382198 +0000 UTC m=+1573.741441837" observedRunningTime="2025-11-24 11:42:43.341936173 +0000 UTC m=+1574.272995862" watchObservedRunningTime="2025-11-24 11:42:43.348327184 +0000 UTC m=+1574.279386823" Nov 24 11:42:46 crc kubenswrapper[4678]: I1124 11:42:46.354733 4678 generic.go:334] "Generic (PLEG): container finished" podID="477ad805-b800-4cb5-b0ae-9fb064cc09ee" containerID="3b9b72b5330c68fb4571f4e5bf05364e30baf7e3be4086bcd4fc9870b0e524e8" exitCode=0 Nov 24 11:42:46 crc kubenswrapper[4678]: I1124 11:42:46.355036 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" event={"ID":"477ad805-b800-4cb5-b0ae-9fb064cc09ee","Type":"ContainerDied","Data":"3b9b72b5330c68fb4571f4e5bf05364e30baf7e3be4086bcd4fc9870b0e524e8"} Nov 24 11:42:46 crc kubenswrapper[4678]: I1124 11:42:46.895356 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:42:46 crc kubenswrapper[4678]: E1124 11:42:46.895693 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:42:47 crc kubenswrapper[4678]: I1124 11:42:47.935774 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.060189 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-inventory\") pod \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.060300 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-ssh-key\") pod \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.060398 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-repo-setup-combined-ca-bundle\") pod \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.060524 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7tcc\" (UniqueName: \"kubernetes.io/projected/477ad805-b800-4cb5-b0ae-9fb064cc09ee-kube-api-access-k7tcc\") pod \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\" (UID: \"477ad805-b800-4cb5-b0ae-9fb064cc09ee\") " Nov 24 11:42:48 crc 
kubenswrapper[4678]: I1124 11:42:48.067065 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477ad805-b800-4cb5-b0ae-9fb064cc09ee-kube-api-access-k7tcc" (OuterVolumeSpecName: "kube-api-access-k7tcc") pod "477ad805-b800-4cb5-b0ae-9fb064cc09ee" (UID: "477ad805-b800-4cb5-b0ae-9fb064cc09ee"). InnerVolumeSpecName "kube-api-access-k7tcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.068593 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "477ad805-b800-4cb5-b0ae-9fb064cc09ee" (UID: "477ad805-b800-4cb5-b0ae-9fb064cc09ee"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.093598 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "477ad805-b800-4cb5-b0ae-9fb064cc09ee" (UID: "477ad805-b800-4cb5-b0ae-9fb064cc09ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.096824 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-inventory" (OuterVolumeSpecName: "inventory") pod "477ad805-b800-4cb5-b0ae-9fb064cc09ee" (UID: "477ad805-b800-4cb5-b0ae-9fb064cc09ee"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.163107 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.163140 4678 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.163153 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7tcc\" (UniqueName: \"kubernetes.io/projected/477ad805-b800-4cb5-b0ae-9fb064cc09ee-kube-api-access-k7tcc\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.163163 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/477ad805-b800-4cb5-b0ae-9fb064cc09ee-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.386515 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" event={"ID":"477ad805-b800-4cb5-b0ae-9fb064cc09ee","Type":"ContainerDied","Data":"355b8ea604ed9b45cd4326cb26280ebbfdcef8be9cec728385bea66dadc4c03a"} Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.386914 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="355b8ea604ed9b45cd4326cb26280ebbfdcef8be9cec728385bea66dadc4c03a" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.387009 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.488685 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m"] Nov 24 11:42:48 crc kubenswrapper[4678]: E1124 11:42:48.489784 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477ad805-b800-4cb5-b0ae-9fb064cc09ee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.489814 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="477ad805-b800-4cb5-b0ae-9fb064cc09ee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.490224 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="477ad805-b800-4cb5-b0ae-9fb064cc09ee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.491708 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.497443 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.497453 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.497473 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.499443 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.505011 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m"] Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.572056 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2q9\" (UniqueName: \"kubernetes.io/projected/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-kube-api-access-lc2q9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.572142 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.572459 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.674460 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.674625 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.674760 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc2q9\" (UniqueName: \"kubernetes.io/projected/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-kube-api-access-lc2q9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.679576 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.679844 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.694803 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc2q9\" (UniqueName: \"kubernetes.io/projected/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-kube-api-access-lc2q9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mz67m\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:48 crc kubenswrapper[4678]: I1124 11:42:48.821233 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:49 crc kubenswrapper[4678]: I1124 11:42:49.431020 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m"] Nov 24 11:42:50 crc kubenswrapper[4678]: I1124 11:42:50.413367 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" event={"ID":"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d","Type":"ContainerStarted","Data":"85eb8b5bb0686164b31703bd44b64979b25b36942a20a0c22607acde5b573afb"} Nov 24 11:42:50 crc kubenswrapper[4678]: I1124 11:42:50.413700 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" event={"ID":"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d","Type":"ContainerStarted","Data":"b843a42aaf169b93d44a7f34e94e7ac7feac2a2b24c6f4bae346da75efa6fa51"} Nov 24 11:42:50 crc kubenswrapper[4678]: I1124 11:42:50.432592 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" podStartSLOduration=1.984637832 podStartE2EDuration="2.432576791s" podCreationTimestamp="2025-11-24 11:42:48 +0000 UTC" firstStartedPulling="2025-11-24 11:42:49.432282558 +0000 UTC m=+1580.363342207" lastFinishedPulling="2025-11-24 11:42:49.880221527 +0000 UTC m=+1580.811281166" observedRunningTime="2025-11-24 11:42:50.428017979 +0000 UTC m=+1581.359077618" watchObservedRunningTime="2025-11-24 11:42:50.432576791 +0000 UTC m=+1581.363636430" Nov 24 11:42:53 crc kubenswrapper[4678]: I1124 11:42:53.452095 4678 generic.go:334] "Generic (PLEG): container finished" podID="85b55648-6ef0-4b5f-aa62-c0cadcc6d66d" containerID="85eb8b5bb0686164b31703bd44b64979b25b36942a20a0c22607acde5b573afb" exitCode=0 Nov 24 11:42:53 crc kubenswrapper[4678]: I1124 11:42:53.452187 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" event={"ID":"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d","Type":"ContainerDied","Data":"85eb8b5bb0686164b31703bd44b64979b25b36942a20a0c22607acde5b573afb"} Nov 24 11:42:54 crc kubenswrapper[4678]: I1124 11:42:54.947507 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.067881 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc2q9\" (UniqueName: \"kubernetes.io/projected/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-kube-api-access-lc2q9\") pod \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.067992 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-inventory\") pod \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.068062 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-ssh-key\") pod \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\" (UID: \"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d\") " Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.075171 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-kube-api-access-lc2q9" (OuterVolumeSpecName: "kube-api-access-lc2q9") pod "85b55648-6ef0-4b5f-aa62-c0cadcc6d66d" (UID: "85b55648-6ef0-4b5f-aa62-c0cadcc6d66d"). InnerVolumeSpecName "kube-api-access-lc2q9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.102604 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85b55648-6ef0-4b5f-aa62-c0cadcc6d66d" (UID: "85b55648-6ef0-4b5f-aa62-c0cadcc6d66d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.136496 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-inventory" (OuterVolumeSpecName: "inventory") pod "85b55648-6ef0-4b5f-aa62-c0cadcc6d66d" (UID: "85b55648-6ef0-4b5f-aa62-c0cadcc6d66d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.170562 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc2q9\" (UniqueName: \"kubernetes.io/projected/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-kube-api-access-lc2q9\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.170598 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.170612 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85b55648-6ef0-4b5f-aa62-c0cadcc6d66d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.491829 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" 
event={"ID":"85b55648-6ef0-4b5f-aa62-c0cadcc6d66d","Type":"ContainerDied","Data":"b843a42aaf169b93d44a7f34e94e7ac7feac2a2b24c6f4bae346da75efa6fa51"} Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.491882 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b843a42aaf169b93d44a7f34e94e7ac7feac2a2b24c6f4bae346da75efa6fa51" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.491953 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mz67m" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.554831 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt"] Nov 24 11:42:55 crc kubenswrapper[4678]: E1124 11:42:55.555317 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b55648-6ef0-4b5f-aa62-c0cadcc6d66d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.555336 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b55648-6ef0-4b5f-aa62-c0cadcc6d66d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.555608 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b55648-6ef0-4b5f-aa62-c0cadcc6d66d" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.556386 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.561562 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.562455 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.562575 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.562796 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.569154 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt"] Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.682215 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.682256 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clsg6\" (UniqueName: \"kubernetes.io/projected/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-kube-api-access-clsg6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 
11:42:55.682323 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.682539 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.785105 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.785159 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clsg6\" (UniqueName: \"kubernetes.io/projected/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-kube-api-access-clsg6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.785235 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.785357 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.789810 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.791477 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.791950 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.808227 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-clsg6\" (UniqueName: \"kubernetes.io/projected/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-kube-api-access-clsg6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:55 crc kubenswrapper[4678]: I1124 11:42:55.883190 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:42:56 crc kubenswrapper[4678]: I1124 11:42:56.441110 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt"] Nov 24 11:42:56 crc kubenswrapper[4678]: I1124 11:42:56.504444 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" event={"ID":"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c","Type":"ContainerStarted","Data":"27a94511f0d1785282c981ab5261f074be7e5d0fbc85fa45506f3d906e1330af"} Nov 24 11:42:57 crc kubenswrapper[4678]: I1124 11:42:57.516349 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" event={"ID":"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c","Type":"ContainerStarted","Data":"fe373e571621be1f8d1bc447ad9347eb29e804af765d5df33b9bf2e482ef5fb1"} Nov 24 11:42:57 crc kubenswrapper[4678]: I1124 11:42:57.551000 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" podStartSLOduration=2.078625449 podStartE2EDuration="2.550978974s" podCreationTimestamp="2025-11-24 11:42:55 +0000 UTC" firstStartedPulling="2025-11-24 11:42:56.437747229 +0000 UTC m=+1587.368806868" lastFinishedPulling="2025-11-24 11:42:56.910100754 +0000 UTC m=+1587.841160393" observedRunningTime="2025-11-24 11:42:57.532019755 +0000 UTC m=+1588.463079394" watchObservedRunningTime="2025-11-24 
11:42:57.550978974 +0000 UTC m=+1588.482038623" Nov 24 11:42:59 crc kubenswrapper[4678]: I1124 11:42:59.906216 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:42:59 crc kubenswrapper[4678]: E1124 11:42:59.907319 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:43:13 crc kubenswrapper[4678]: I1124 11:43:13.895468 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:43:13 crc kubenswrapper[4678]: E1124 11:43:13.896328 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:43:24 crc kubenswrapper[4678]: I1124 11:43:24.898540 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:43:24 crc kubenswrapper[4678]: E1124 11:43:24.900293 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:43:33 crc kubenswrapper[4678]: I1124 11:43:33.950255 4678 scope.go:117] "RemoveContainer" containerID="9a57756f7447c44c2fcb5a2fd3cfd7f2bd3fd44b62a4fd7bf70162e48c6d1627" Nov 24 11:43:37 crc kubenswrapper[4678]: I1124 11:43:37.895699 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:43:37 crc kubenswrapper[4678]: E1124 11:43:37.896596 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:43:51 crc kubenswrapper[4678]: I1124 11:43:51.896965 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:43:51 crc kubenswrapper[4678]: E1124 11:43:51.898256 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:44:04 crc kubenswrapper[4678]: I1124 11:44:04.895549 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:44:04 crc kubenswrapper[4678]: E1124 11:44:04.896486 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:44:19 crc kubenswrapper[4678]: I1124 11:44:19.903499 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:44:19 crc kubenswrapper[4678]: E1124 11:44:19.904597 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:44:32 crc kubenswrapper[4678]: I1124 11:44:32.896563 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:44:32 crc kubenswrapper[4678]: E1124 11:44:32.897580 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:44:34 crc kubenswrapper[4678]: I1124 11:44:34.099181 4678 scope.go:117] "RemoveContainer" containerID="b8beabc8c137bd68cdc9d83cced8b55faaac6d260af150e9c7974af1c7cb1374" Nov 24 11:44:34 crc kubenswrapper[4678]: I1124 11:44:34.145010 4678 scope.go:117] "RemoveContainer" 
containerID="0146b480d3a5f09b8eccf47c2ede2fd87021480f5bf7b5d1a65e7559d2e743d8" Nov 24 11:44:34 crc kubenswrapper[4678]: I1124 11:44:34.186572 4678 scope.go:117] "RemoveContainer" containerID="50b408996eabd8bc0e5b0d4f53e3cb30296cb8743c1b755d2a615a76ed7f92a7" Nov 24 11:44:44 crc kubenswrapper[4678]: I1124 11:44:44.896493 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:44:44 crc kubenswrapper[4678]: E1124 11:44:44.898661 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:44:59 crc kubenswrapper[4678]: I1124 11:44:59.919423 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:44:59 crc kubenswrapper[4678]: E1124 11:44:59.921832 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.173860 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx"] Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.176662 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.184901 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.185487 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.191443 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx"] Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.292584 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67504c69-9aa8-4c55-8e64-fbb6291254e5-config-volume\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.292653 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67504c69-9aa8-4c55-8e64-fbb6291254e5-secret-volume\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.292698 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmqb\" (UniqueName: \"kubernetes.io/projected/67504c69-9aa8-4c55-8e64-fbb6291254e5-kube-api-access-kqmqb\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.395115 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67504c69-9aa8-4c55-8e64-fbb6291254e5-config-volume\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.395422 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67504c69-9aa8-4c55-8e64-fbb6291254e5-secret-volume\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.395545 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmqb\" (UniqueName: \"kubernetes.io/projected/67504c69-9aa8-4c55-8e64-fbb6291254e5-kube-api-access-kqmqb\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.396088 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67504c69-9aa8-4c55-8e64-fbb6291254e5-config-volume\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.403382 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/67504c69-9aa8-4c55-8e64-fbb6291254e5-secret-volume\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.412481 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmqb\" (UniqueName: \"kubernetes.io/projected/67504c69-9aa8-4c55-8e64-fbb6291254e5-kube-api-access-kqmqb\") pod \"collect-profiles-29399745-8sqgx\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.499889 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:00 crc kubenswrapper[4678]: I1124 11:45:00.991651 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx"] Nov 24 11:45:00 crc kubenswrapper[4678]: W1124 11:45:00.999605 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67504c69_9aa8_4c55_8e64_fbb6291254e5.slice/crio-63038524c3626c5dfdc0c4c3266b8d004d395d20f1fbc50244502b96048176ae WatchSource:0}: Error finding container 63038524c3626c5dfdc0c4c3266b8d004d395d20f1fbc50244502b96048176ae: Status 404 returned error can't find the container with id 63038524c3626c5dfdc0c4c3266b8d004d395d20f1fbc50244502b96048176ae Nov 24 11:45:01 crc kubenswrapper[4678]: I1124 11:45:01.193545 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" event={"ID":"67504c69-9aa8-4c55-8e64-fbb6291254e5","Type":"ContainerStarted","Data":"63038524c3626c5dfdc0c4c3266b8d004d395d20f1fbc50244502b96048176ae"} Nov 24 11:45:02 crc 
kubenswrapper[4678]: I1124 11:45:02.213119 4678 generic.go:334] "Generic (PLEG): container finished" podID="67504c69-9aa8-4c55-8e64-fbb6291254e5" containerID="1ded26ce454bc6632d25fe34ccf49bcf3287ac60447eba91aa7c5f521fc616e8" exitCode=0 Nov 24 11:45:02 crc kubenswrapper[4678]: I1124 11:45:02.213382 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" event={"ID":"67504c69-9aa8-4c55-8e64-fbb6291254e5","Type":"ContainerDied","Data":"1ded26ce454bc6632d25fe34ccf49bcf3287ac60447eba91aa7c5f521fc616e8"} Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.590837 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.690587 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67504c69-9aa8-4c55-8e64-fbb6291254e5-config-volume\") pod \"67504c69-9aa8-4c55-8e64-fbb6291254e5\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.691105 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqmqb\" (UniqueName: \"kubernetes.io/projected/67504c69-9aa8-4c55-8e64-fbb6291254e5-kube-api-access-kqmqb\") pod \"67504c69-9aa8-4c55-8e64-fbb6291254e5\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.691260 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67504c69-9aa8-4c55-8e64-fbb6291254e5-secret-volume\") pod \"67504c69-9aa8-4c55-8e64-fbb6291254e5\" (UID: \"67504c69-9aa8-4c55-8e64-fbb6291254e5\") " Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.692275 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/67504c69-9aa8-4c55-8e64-fbb6291254e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "67504c69-9aa8-4c55-8e64-fbb6291254e5" (UID: "67504c69-9aa8-4c55-8e64-fbb6291254e5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.698169 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67504c69-9aa8-4c55-8e64-fbb6291254e5-kube-api-access-kqmqb" (OuterVolumeSpecName: "kube-api-access-kqmqb") pod "67504c69-9aa8-4c55-8e64-fbb6291254e5" (UID: "67504c69-9aa8-4c55-8e64-fbb6291254e5"). InnerVolumeSpecName "kube-api-access-kqmqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.698364 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67504c69-9aa8-4c55-8e64-fbb6291254e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "67504c69-9aa8-4c55-8e64-fbb6291254e5" (UID: "67504c69-9aa8-4c55-8e64-fbb6291254e5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.794091 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqmqb\" (UniqueName: \"kubernetes.io/projected/67504c69-9aa8-4c55-8e64-fbb6291254e5-kube-api-access-kqmqb\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.794352 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67504c69-9aa8-4c55-8e64-fbb6291254e5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:03 crc kubenswrapper[4678]: I1124 11:45:03.794452 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67504c69-9aa8-4c55-8e64-fbb6291254e5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 11:45:04 crc kubenswrapper[4678]: I1124 11:45:04.237199 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" event={"ID":"67504c69-9aa8-4c55-8e64-fbb6291254e5","Type":"ContainerDied","Data":"63038524c3626c5dfdc0c4c3266b8d004d395d20f1fbc50244502b96048176ae"} Nov 24 11:45:04 crc kubenswrapper[4678]: I1124 11:45:04.237242 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63038524c3626c5dfdc0c4c3266b8d004d395d20f1fbc50244502b96048176ae" Nov 24 11:45:04 crc kubenswrapper[4678]: I1124 11:45:04.237266 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx" Nov 24 11:45:14 crc kubenswrapper[4678]: I1124 11:45:14.896901 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:45:14 crc kubenswrapper[4678]: E1124 11:45:14.897769 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:45:26 crc kubenswrapper[4678]: I1124 11:45:26.895852 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:45:26 crc kubenswrapper[4678]: E1124 11:45:26.897120 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:45:41 crc kubenswrapper[4678]: I1124 11:45:41.896609 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:45:41 crc kubenswrapper[4678]: E1124 11:45:41.898964 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:45:57 crc kubenswrapper[4678]: I1124 11:45:57.896853 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:45:57 crc kubenswrapper[4678]: E1124 11:45:57.897681 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:46:08 crc kubenswrapper[4678]: I1124 11:46:08.896548 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:46:09 crc kubenswrapper[4678]: I1124 11:46:09.061291 4678 generic.go:334] "Generic (PLEG): container finished" podID="d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c" containerID="fe373e571621be1f8d1bc447ad9347eb29e804af765d5df33b9bf2e482ef5fb1" exitCode=0 Nov 24 11:46:09 crc kubenswrapper[4678]: I1124 11:46:09.061548 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" event={"ID":"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c","Type":"ContainerDied","Data":"fe373e571621be1f8d1bc447ad9347eb29e804af765d5df33b9bf2e482ef5fb1"} Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.077654 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"07f7b4bf38854f595d8be8c0fa05f91ad02239dc235ff30184b0ce433099dc00"} Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.683449 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.806324 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-inventory\") pod \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.806524 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-bootstrap-combined-ca-bundle\") pod \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.806635 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clsg6\" (UniqueName: \"kubernetes.io/projected/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-kube-api-access-clsg6\") pod \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.806759 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-ssh-key\") pod \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\" (UID: \"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c\") " Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.814498 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-kube-api-access-clsg6" (OuterVolumeSpecName: "kube-api-access-clsg6") pod "d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c" (UID: "d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c"). InnerVolumeSpecName "kube-api-access-clsg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.815401 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c" (UID: "d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.846915 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-inventory" (OuterVolumeSpecName: "inventory") pod "d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c" (UID: "d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.869560 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c" (UID: "d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.909237 4678 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.909278 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clsg6\" (UniqueName: \"kubernetes.io/projected/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-kube-api-access-clsg6\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.909292 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:10 crc kubenswrapper[4678]: I1124 11:46:10.909308 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.091784 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" event={"ID":"d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c","Type":"ContainerDied","Data":"27a94511f0d1785282c981ab5261f074be7e5d0fbc85fa45506f3d906e1330af"} Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.091834 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27a94511f0d1785282c981ab5261f074be7e5d0fbc85fa45506f3d906e1330af" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.091879 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.209793 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj"] Nov 24 11:46:11 crc kubenswrapper[4678]: E1124 11:46:11.210715 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.210729 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:11 crc kubenswrapper[4678]: E1124 11:46:11.210757 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67504c69-9aa8-4c55-8e64-fbb6291254e5" containerName="collect-profiles" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.210763 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="67504c69-9aa8-4c55-8e64-fbb6291254e5" containerName="collect-profiles" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.211048 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.211066 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="67504c69-9aa8-4c55-8e64-fbb6291254e5" containerName="collect-profiles" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.211918 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.219234 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.219545 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.219721 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.219812 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.232885 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj"] Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.319853 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-kube-api-access-qgqc9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.320208 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 
11:46:11.320348 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.422664 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.422949 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-kube-api-access-qgqc9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.423073 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.427440 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.441876 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.444495 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-kube-api-access-qgqc9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-54bsj\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:11 crc kubenswrapper[4678]: I1124 11:46:11.533751 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:46:12 crc kubenswrapper[4678]: I1124 11:46:12.162906 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj"] Nov 24 11:46:13 crc kubenswrapper[4678]: I1124 11:46:13.141911 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" event={"ID":"e3962a1c-012b-4c17-85d3-bf3f2f5b6147","Type":"ContainerStarted","Data":"5fb38a67a4f204c0dfd786babb40a159629615f844be60939fe87dc2f13c9fde"} Nov 24 11:46:14 crc kubenswrapper[4678]: I1124 11:46:14.183467 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" event={"ID":"e3962a1c-012b-4c17-85d3-bf3f2f5b6147","Type":"ContainerStarted","Data":"31992a252a8d16108f191841f27b3f53834ca1ccfbb62f4b6a4182b058f913bf"} Nov 24 11:46:14 crc kubenswrapper[4678]: I1124 11:46:14.210843 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" podStartSLOduration=2.637040522 podStartE2EDuration="3.210820996s" podCreationTimestamp="2025-11-24 11:46:11 +0000 UTC" firstStartedPulling="2025-11-24 11:46:12.159617711 +0000 UTC m=+1783.090677350" lastFinishedPulling="2025-11-24 11:46:12.733398145 +0000 UTC m=+1783.664457824" observedRunningTime="2025-11-24 11:46:14.205201455 +0000 UTC m=+1785.136261104" watchObservedRunningTime="2025-11-24 11:46:14.210820996 +0000 UTC m=+1785.141880635" Nov 24 11:46:22 crc kubenswrapper[4678]: I1124 11:46:22.076618 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5vx7g"] Nov 24 11:46:22 crc kubenswrapper[4678]: I1124 11:46:22.091322 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5vx7g"] Nov 24 11:46:23 crc kubenswrapper[4678]: I1124 
11:46:23.915175 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3b0873-a45a-4311-a6e9-8f0dc4d031b8" path="/var/lib/kubelet/pods/ec3b0873-a45a-4311-a6e9-8f0dc4d031b8/volumes" Nov 24 11:46:24 crc kubenswrapper[4678]: I1124 11:46:24.058266 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3978-account-create-gsvfr"] Nov 24 11:46:24 crc kubenswrapper[4678]: I1124 11:46:24.073440 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8dce-account-create-k6d7z"] Nov 24 11:46:24 crc kubenswrapper[4678]: I1124 11:46:24.084721 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8dce-account-create-k6d7z"] Nov 24 11:46:24 crc kubenswrapper[4678]: I1124 11:46:24.096404 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3978-account-create-gsvfr"] Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.044494 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-c45f-account-create-pmk9q"] Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.061368 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bwdh4"] Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.076567 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-c45f-account-create-pmk9q"] Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.090393 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bwdh4"] Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.104567 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-g8hdr"] Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.118934 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-g8hdr"] Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.961505 4678 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700ed725-dec9-4b2c-873c-82075bbcd721" path="/var/lib/kubelet/pods/700ed725-dec9-4b2c-873c-82075bbcd721/volumes" Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.970421 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ab28a9-6ee0-4a76-ae5f-c4b27521125d" path="/var/lib/kubelet/pods/91ab28a9-6ee0-4a76-ae5f-c4b27521125d/volumes" Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.980232 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d8b1fc-83cc-4133-a390-e8d87ee4375b" path="/var/lib/kubelet/pods/93d8b1fc-83cc-4133-a390-e8d87ee4375b/volumes" Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.987132 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68cf86b-0798-4155-ba4c-dfc5ef2698cc" path="/var/lib/kubelet/pods/e68cf86b-0798-4155-ba4c-dfc5ef2698cc/volumes" Nov 24 11:46:25 crc kubenswrapper[4678]: I1124 11:46:25.990626 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eedffe7d-12cf-4276-b084-e121838c576d" path="/var/lib/kubelet/pods/eedffe7d-12cf-4276-b084-e121838c576d/volumes" Nov 24 11:46:26 crc kubenswrapper[4678]: I1124 11:46:26.046039 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1c56-account-create-jk4zk"] Nov 24 11:46:26 crc kubenswrapper[4678]: I1124 11:46:26.061356 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8qp75"] Nov 24 11:46:26 crc kubenswrapper[4678]: I1124 11:46:26.072079 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1c56-account-create-jk4zk"] Nov 24 11:46:26 crc kubenswrapper[4678]: I1124 11:46:26.083003 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8qp75"] Nov 24 11:46:27 crc kubenswrapper[4678]: I1124 11:46:27.910054 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="829c1e90-ba5e-4c4f-9b18-0bd8144c1e92" path="/var/lib/kubelet/pods/829c1e90-ba5e-4c4f-9b18-0bd8144c1e92/volumes" Nov 24 11:46:27 crc kubenswrapper[4678]: I1124 11:46:27.914504 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb84b0f1-427a-4440-bfcc-cc3d7e933496" path="/var/lib/kubelet/pods/bb84b0f1-427a-4440-bfcc-cc3d7e933496/volumes" Nov 24 11:46:32 crc kubenswrapper[4678]: I1124 11:46:32.047539 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-c5fe-account-create-759dd"] Nov 24 11:46:32 crc kubenswrapper[4678]: I1124 11:46:32.066270 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"] Nov 24 11:46:32 crc kubenswrapper[4678]: I1124 11:46:32.080912 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-c5fe-account-create-759dd"] Nov 24 11:46:32 crc kubenswrapper[4678]: I1124 11:46:32.092519 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fzs56"] Nov 24 11:46:33 crc kubenswrapper[4678]: I1124 11:46:33.923586 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07773117-0d6a-4c24-a8d6-4f2f27f280d9" path="/var/lib/kubelet/pods/07773117-0d6a-4c24-a8d6-4f2f27f280d9/volumes" Nov 24 11:46:33 crc kubenswrapper[4678]: I1124 11:46:33.928718 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7" path="/var/lib/kubelet/pods/9c2cc500-b88a-441b-bd7b-3f3bed5dd1b7/volumes" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.340238 4678 scope.go:117] "RemoveContainer" containerID="db79d0acb9b08906621f939f8f093d9239e80208b395b649a5ea5f9e723d7485" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.378165 4678 scope.go:117] "RemoveContainer" containerID="88fb9c62045b4dc362edb6f6dd927b852012457878eace6daad33a933e608932" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 
11:46:34.451239 4678 scope.go:117] "RemoveContainer" containerID="2f58d8570f99884b73bf01b741429fa315cea40f4309d3c345c021362ad654e6" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.524628 4678 scope.go:117] "RemoveContainer" containerID="dd2edd04b534fd5e1e7bf5339ebb4ba8c9ead3c3d07fe21966654934f83c6bb7" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.581395 4678 scope.go:117] "RemoveContainer" containerID="018cfb5aa100853e1ee9f324cf4a2b16756725fe0353c7a3c29fd43cba415000" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.662997 4678 scope.go:117] "RemoveContainer" containerID="96c531c4a57a2de56b3b6fa821d3cc8e221a68f6ff85ec020fa9f8c7fb238f5a" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.704837 4678 scope.go:117] "RemoveContainer" containerID="88daf149dc9bea59ebd135dc4a493f8b227343e982fdd9f984839ab671e5ada1" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.732304 4678 scope.go:117] "RemoveContainer" containerID="d28f75cf0874e3d9ad4b9406bc6176a70c3f1de74f08e638aa0b0002497e738f" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.757036 4678 scope.go:117] "RemoveContainer" containerID="40e348254e60236776df3d800722a35a5faea143fa15615ee31792077462433e" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.786683 4678 scope.go:117] "RemoveContainer" containerID="c51c0a176d1eb606671a8575b2ee81fae466e997c3468208d9eb7a197774d59c" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.815896 4678 scope.go:117] "RemoveContainer" containerID="68ce9dba487c9232f62a604f038bf8bb17c7d6e223a7161c08731530a8f86eab" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.835745 4678 scope.go:117] "RemoveContainer" containerID="2c3ad5e32603b8f2c80538ad98ce689ae2bb486d85cd5324eb31a6db24139c4e" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.860573 4678 scope.go:117] "RemoveContainer" containerID="09c68769805841926934d3de1ff3eb1c0c3bb4eb0caefc61d6495abff8f0c1af" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.894435 4678 
scope.go:117] "RemoveContainer" containerID="0d6f607cbc91f48c23bf550b187b2168ee391ed884fbb797369b278d5eef0ca8" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.923699 4678 scope.go:117] "RemoveContainer" containerID="3e33b21d65efdd6fdf91f72f702d551dc57224d9d07f38463dd019d1af4aca53" Nov 24 11:46:34 crc kubenswrapper[4678]: I1124 11:46:34.947653 4678 scope.go:117] "RemoveContainer" containerID="c922df9ccae76f28dea5e2dec204385b587b6d6fd167f7cc3fb58d4ae02e8e7b" Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.051390 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-fpvbh"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.079743 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6wt6l"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.095028 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6472-account-create-dmhpl"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.108392 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-p5q8z"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.120927 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-fpvbh"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.134533 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dec4-account-create-q8wzh"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.146488 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dec4-account-create-q8wzh"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.162176 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6wt6l"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 11:46:42.176645 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6472-account-create-dmhpl"] Nov 24 11:46:42 crc kubenswrapper[4678]: I1124 
11:46:42.189888 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-p5q8z"] Nov 24 11:46:43 crc kubenswrapper[4678]: I1124 11:46:43.917201 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14aebdf2-73dd-4904-a5bb-01dbe513298e" path="/var/lib/kubelet/pods/14aebdf2-73dd-4904-a5bb-01dbe513298e/volumes" Nov 24 11:46:43 crc kubenswrapper[4678]: I1124 11:46:43.919251 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1811771b-0c1b-4767-b4e2-ec8b52d12f18" path="/var/lib/kubelet/pods/1811771b-0c1b-4767-b4e2-ec8b52d12f18/volumes" Nov 24 11:46:43 crc kubenswrapper[4678]: I1124 11:46:43.924908 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b637f29-368e-458f-93dd-77f478100f0b" path="/var/lib/kubelet/pods/2b637f29-368e-458f-93dd-77f478100f0b/volumes" Nov 24 11:46:43 crc kubenswrapper[4678]: I1124 11:46:43.927480 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ccdb39d-cd19-45a6-aa4d-bbee44622101" path="/var/lib/kubelet/pods/6ccdb39d-cd19-45a6-aa4d-bbee44622101/volumes" Nov 24 11:46:43 crc kubenswrapper[4678]: I1124 11:46:43.931470 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb5591ea-c50b-46c1-8ed3-e2062967d0f1" path="/var/lib/kubelet/pods/cb5591ea-c50b-46c1-8ed3-e2062967d0f1/volumes" Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.067822 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-b0fb-account-create-w4x74"] Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.077474 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-750d-account-create-w4fsn"] Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.088538 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-q26st"] Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.100155 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-750d-account-create-w4fsn"] Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.118147 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-q26st"] Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.139067 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-b0fb-account-create-w4x74"] Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.909559 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfd80a7-5fb2-4a38-9a9b-839510edff06" path="/var/lib/kubelet/pods/3cfd80a7-5fb2-4a38-9a9b-839510edff06/volumes" Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.912042 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a467ed-5cfa-44da-9e07-7902433ef5a0" path="/var/lib/kubelet/pods/75a467ed-5cfa-44da-9e07-7902433ef5a0/volumes" Nov 24 11:46:45 crc kubenswrapper[4678]: I1124 11:46:45.912652 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9473500-25d5-4b49-a95a-c4b1de4ac854" path="/var/lib/kubelet/pods/c9473500-25d5-4b49-a95a-c4b1de4ac854/volumes" Nov 24 11:46:59 crc kubenswrapper[4678]: I1124 11:46:59.083575 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4dr4g"] Nov 24 11:46:59 crc kubenswrapper[4678]: I1124 11:46:59.099945 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4dr4g"] Nov 24 11:46:59 crc kubenswrapper[4678]: I1124 11:46:59.926192 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef61d04e-97aa-4f5e-9fbd-f6abf2258b87" path="/var/lib/kubelet/pods/ef61d04e-97aa-4f5e-9fbd-f6abf2258b87/volumes" Nov 24 11:47:13 crc kubenswrapper[4678]: I1124 11:47:13.046604 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ch9vg"] Nov 24 11:47:13 crc kubenswrapper[4678]: I1124 11:47:13.073249 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-db-sync-ch9vg"] Nov 24 11:47:13 crc kubenswrapper[4678]: I1124 11:47:13.910431 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6005a5-db1b-49b6-87ce-c507e10a6d21" path="/var/lib/kubelet/pods/3c6005a5-db1b-49b6-87ce-c507e10a6d21/volumes" Nov 24 11:47:29 crc kubenswrapper[4678]: I1124 11:47:29.047517 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gwwg7"] Nov 24 11:47:29 crc kubenswrapper[4678]: I1124 11:47:29.058085 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gwwg7"] Nov 24 11:47:29 crc kubenswrapper[4678]: I1124 11:47:29.915929 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="471c5038-c8ee-4819-bb5d-93c509389555" path="/var/lib/kubelet/pods/471c5038-c8ee-4819-bb5d-93c509389555/volumes" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.255906 4678 scope.go:117] "RemoveContainer" containerID="ecdacfb168c696319b3f83b8abf5157ddc7034f2fa809f7b60d0b58f8a39fbec" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.292340 4678 scope.go:117] "RemoveContainer" containerID="7c68d0e13125e6de9f366d7a055c01ab2c02dd4593257575bc8a1bb9a12733c7" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.358874 4678 scope.go:117] "RemoveContainer" containerID="acf2e9eb1542e13e2a26b0e3eac8b39ba506b86eb9da7dcb7502cb8a50b56f16" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.401312 4678 scope.go:117] "RemoveContainer" containerID="8c1f058f23d20600e023cc13524dfec570e01ffae5b8cdcf98c054b40705eace" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.457982 4678 scope.go:117] "RemoveContainer" containerID="3b91ff2ca751c03243723081ab6076402a78d1c9de6e70c396865e5e0b2b1d92" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.501739 4678 scope.go:117] "RemoveContainer" containerID="f0e353e36f389b9baf5c230fa637960757b888b4752a24d4d9efa1a09723b176" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.544108 4678 
scope.go:117] "RemoveContainer" containerID="250d204d0d5b06b5d2a1993bf32182b03ea3b115638f42af2574d90d657371d7" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.596329 4678 scope.go:117] "RemoveContainer" containerID="d7fa12841709236f29a8bfbcce4110b9c20e73b8c19d8e49f010f101dbc02386" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.638682 4678 scope.go:117] "RemoveContainer" containerID="31cd422052c78c53e4a0c7c29cc3f9e1aa12bad0cc4b6036639b40662d670412" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.660636 4678 scope.go:117] "RemoveContainer" containerID="2b94864f00a8fe20a194b3abcabaf4f2d1511aa9071b11f16f78c1d89886ab9e" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.688867 4678 scope.go:117] "RemoveContainer" containerID="9e487f59561fa870b6b16aefba3eb5a2c6fe89266d2547405673166244a1edda" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.711641 4678 scope.go:117] "RemoveContainer" containerID="a1ed7ed49e85e68ad5e031f4bee6ea6971c2f51f5ab6c7a10a335daa26c5f2d8" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.741388 4678 scope.go:117] "RemoveContainer" containerID="9b4e0692ae8403cfbc2cd50df22fd5121d72960e267d6fd59c086085a8776297" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.781095 4678 scope.go:117] "RemoveContainer" containerID="4f902861562f9d0d1dd94162eea5081f397c0f5c9593cbee0475a01a30978c98" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.802389 4678 scope.go:117] "RemoveContainer" containerID="449aac29913811d007ae9e033dd682e63a3fa73494072d44ec10a60c67cafc59" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.822199 4678 scope.go:117] "RemoveContainer" containerID="38d25c4465104f0c39efc2438beef8a67615df4719e4b90ee704f716cbb70f74" Nov 24 11:47:35 crc kubenswrapper[4678]: I1124 11:47:35.846320 4678 scope.go:117] "RemoveContainer" containerID="8794fb3bae779b82e15f66fe9acb61f8e75ef61f136a9c64bfd670d9407e521c" Nov 24 11:47:40 crc kubenswrapper[4678]: I1124 11:47:40.056817 4678 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/placement-db-sync-4qbq8"] Nov 24 11:47:40 crc kubenswrapper[4678]: I1124 11:47:40.078446 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-bcswl"] Nov 24 11:47:40 crc kubenswrapper[4678]: I1124 11:47:40.092550 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x5lx5"] Nov 24 11:47:40 crc kubenswrapper[4678]: I1124 11:47:40.104538 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x5lx5"] Nov 24 11:47:40 crc kubenswrapper[4678]: I1124 11:47:40.114150 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bcswl"] Nov 24 11:47:40 crc kubenswrapper[4678]: I1124 11:47:40.126496 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4qbq8"] Nov 24 11:47:41 crc kubenswrapper[4678]: I1124 11:47:41.916870 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195eda15-ecc1-4041-b42e-ffe751e686af" path="/var/lib/kubelet/pods/195eda15-ecc1-4041-b42e-ffe751e686af/volumes" Nov 24 11:47:41 crc kubenswrapper[4678]: I1124 11:47:41.921151 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bebde18-e99d-49a3-bb56-5f0de9049363" path="/var/lib/kubelet/pods/4bebde18-e99d-49a3-bb56-5f0de9049363/volumes" Nov 24 11:47:41 crc kubenswrapper[4678]: I1124 11:47:41.922811 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d67de7-2cd2-480b-b8f9-1c73bff16add" path="/var/lib/kubelet/pods/82d67de7-2cd2-480b-b8f9-1c73bff16add/volumes" Nov 24 11:48:01 crc kubenswrapper[4678]: I1124 11:48:01.046353 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qx8wj"] Nov 24 11:48:01 crc kubenswrapper[4678]: I1124 11:48:01.060093 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qx8wj"] Nov 24 11:48:01 crc kubenswrapper[4678]: I1124 11:48:01.924227 4678 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf1a661-b2a3-458a-b504-2cac3277bd5d" path="/var/lib/kubelet/pods/7bf1a661-b2a3-458a-b504-2cac3277bd5d/volumes" Nov 24 11:48:14 crc kubenswrapper[4678]: I1124 11:48:14.829556 4678 generic.go:334] "Generic (PLEG): container finished" podID="e3962a1c-012b-4c17-85d3-bf3f2f5b6147" containerID="31992a252a8d16108f191841f27b3f53834ca1ccfbb62f4b6a4182b058f913bf" exitCode=0 Nov 24 11:48:14 crc kubenswrapper[4678]: I1124 11:48:14.829620 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" event={"ID":"e3962a1c-012b-4c17-85d3-bf3f2f5b6147","Type":"ContainerDied","Data":"31992a252a8d16108f191841f27b3f53834ca1ccfbb62f4b6a4182b058f913bf"} Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.334237 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.460312 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-ssh-key\") pod \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.460416 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-inventory\") pod \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\" (UID: \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.460507 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-kube-api-access-qgqc9\") pod \"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\" (UID: 
\"e3962a1c-012b-4c17-85d3-bf3f2f5b6147\") " Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.466232 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-kube-api-access-qgqc9" (OuterVolumeSpecName: "kube-api-access-qgqc9") pod "e3962a1c-012b-4c17-85d3-bf3f2f5b6147" (UID: "e3962a1c-012b-4c17-85d3-bf3f2f5b6147"). InnerVolumeSpecName "kube-api-access-qgqc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.493838 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-inventory" (OuterVolumeSpecName: "inventory") pod "e3962a1c-012b-4c17-85d3-bf3f2f5b6147" (UID: "e3962a1c-012b-4c17-85d3-bf3f2f5b6147"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.493869 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3962a1c-012b-4c17-85d3-bf3f2f5b6147" (UID: "e3962a1c-012b-4c17-85d3-bf3f2f5b6147"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.563568 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-kube-api-access-qgqc9\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.563600 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.563610 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3962a1c-012b-4c17-85d3-bf3f2f5b6147-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.859697 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" event={"ID":"e3962a1c-012b-4c17-85d3-bf3f2f5b6147","Type":"ContainerDied","Data":"5fb38a67a4f204c0dfd786babb40a159629615f844be60939fe87dc2f13c9fde"} Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.859767 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fb38a67a4f204c0dfd786babb40a159629615f844be60939fe87dc2f13c9fde" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.859859 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-54bsj" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.936743 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p"] Nov 24 11:48:16 crc kubenswrapper[4678]: E1124 11:48:16.937332 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3962a1c-012b-4c17-85d3-bf3f2f5b6147" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.937350 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3962a1c-012b-4c17-85d3-bf3f2f5b6147" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.942256 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3962a1c-012b-4c17-85d3-bf3f2f5b6147" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.943123 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.946000 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.946168 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.946854 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.954372 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:48:16 crc kubenswrapper[4678]: I1124 11:48:16.956214 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p"] Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.078488 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.079436 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl2l7\" (UniqueName: \"kubernetes.io/projected/2f93cb91-ae3f-42ef-844b-70d428271ee1-kube-api-access-hl2l7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 
11:48:17.079609 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.182515 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.183046 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl2l7\" (UniqueName: \"kubernetes.io/projected/2f93cb91-ae3f-42ef-844b-70d428271ee1-kube-api-access-hl2l7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.183639 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.187148 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.190046 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.200989 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl2l7\" (UniqueName: \"kubernetes.io/projected/2f93cb91-ae3f-42ef-844b-70d428271ee1-kube-api-access-hl2l7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:17 crc kubenswrapper[4678]: I1124 11:48:17.269823 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:48:18 crc kubenswrapper[4678]: I1124 11:48:18.063078 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p"] Nov 24 11:48:18 crc kubenswrapper[4678]: I1124 11:48:18.097788 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:48:18 crc kubenswrapper[4678]: I1124 11:48:18.883064 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" event={"ID":"2f93cb91-ae3f-42ef-844b-70d428271ee1","Type":"ContainerStarted","Data":"5a420d7a85144f8fb50a48c00321a8a50805a187486df0fc44b7f4afe37e2e3a"} Nov 24 11:48:18 crc kubenswrapper[4678]: I1124 11:48:18.883417 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" event={"ID":"2f93cb91-ae3f-42ef-844b-70d428271ee1","Type":"ContainerStarted","Data":"73a161a45597bd728723de4db6c3ed23e116b27177b7220f9a9db7920053338a"} Nov 24 11:48:18 crc kubenswrapper[4678]: I1124 11:48:18.903010 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" podStartSLOduration=2.440586949 podStartE2EDuration="2.902986624s" podCreationTimestamp="2025-11-24 11:48:16 +0000 UTC" firstStartedPulling="2025-11-24 11:48:18.097578541 +0000 UTC m=+1909.028638180" lastFinishedPulling="2025-11-24 11:48:18.559978216 +0000 UTC m=+1909.491037855" observedRunningTime="2025-11-24 11:48:18.899166641 +0000 UTC m=+1909.830226290" watchObservedRunningTime="2025-11-24 11:48:18.902986624 +0000 UTC m=+1909.834046263" Nov 24 11:48:30 crc kubenswrapper[4678]: I1124 11:48:30.296737 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:48:30 crc kubenswrapper[4678]: I1124 11:48:30.297269 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:48:36 crc kubenswrapper[4678]: I1124 11:48:36.112663 4678 scope.go:117] "RemoveContainer" containerID="43dee8dd2a553aeca802b33092d914631dc3a26f4437d9fd32976b28a51fd95b" Nov 24 11:48:36 crc kubenswrapper[4678]: I1124 11:48:36.172752 4678 scope.go:117] "RemoveContainer" containerID="84292b8ff95df849599cdd6b81c24ffb6a598d8bd67407b695d8c64170cb7699" Nov 24 11:48:36 crc kubenswrapper[4678]: I1124 11:48:36.205213 4678 scope.go:117] "RemoveContainer" containerID="7fed17068414762afc89bebe3b204fd97ca53935dd335d0eb07056a90449e648" Nov 24 11:48:36 crc kubenswrapper[4678]: I1124 11:48:36.268910 4678 scope.go:117] "RemoveContainer" containerID="fd801e7934b8b5b53e0087782f79fb2cb2fd75161e513e16d04c1cd04384df99" Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.059849 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-02a8-account-create-vkqwp"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.070933 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-02a8-account-create-vkqwp"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.084862 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vfl9l"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.106331 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rn787"] Nov 24 11:48:37 crc 
kubenswrapper[4678]: I1124 11:48:37.117588 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-abe0-account-create-7rbmj"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.128752 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-244b-account-create-6jsxj"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.138249 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-fq7ll"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.147482 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-244b-account-create-6jsxj"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.156975 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rn787"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.166532 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-abe0-account-create-7rbmj"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.175694 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vfl9l"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.184041 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-fq7ll"] Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.915785 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2aa84a-6c99-44d4-b3e4-11756080a16a" path="/var/lib/kubelet/pods/4f2aa84a-6c99-44d4-b3e4-11756080a16a/volumes" Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.923964 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5223630b-272a-434b-83df-ef3915f58880" path="/var/lib/kubelet/pods/5223630b-272a-434b-83df-ef3915f58880/volumes" Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.928295 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c34bab2-8d47-43e1-b367-8dd9b5c13c47" 
path="/var/lib/kubelet/pods/6c34bab2-8d47-43e1-b367-8dd9b5c13c47/volumes" Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.930858 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="923a45a5-bc05-4472-b647-b280bec7618b" path="/var/lib/kubelet/pods/923a45a5-bc05-4472-b647-b280bec7618b/volumes" Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.931717 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34ca05d-7673-435b-a6e6-0d775765472c" path="/var/lib/kubelet/pods/e34ca05d-7673-435b-a6e6-0d775765472c/volumes" Nov 24 11:48:37 crc kubenswrapper[4678]: I1124 11:48:37.935882 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc44c93e-8f06-48eb-a0a6-36a04e942702" path="/var/lib/kubelet/pods/fc44c93e-8f06-48eb-a0a6-36a04e942702/volumes" Nov 24 11:49:00 crc kubenswrapper[4678]: I1124 11:49:00.296820 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:49:00 crc kubenswrapper[4678]: I1124 11:49:00.297330 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:49:15 crc kubenswrapper[4678]: I1124 11:49:15.057922 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s8zw"] Nov 24 11:49:15 crc kubenswrapper[4678]: I1124 11:49:15.066934 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s8zw"] Nov 24 11:49:15 crc kubenswrapper[4678]: I1124 11:49:15.913125 4678 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33eb1a4f-f16f-474b-bb69-a3d6d87df9f6" path="/var/lib/kubelet/pods/33eb1a4f-f16f-474b-bb69-a3d6d87df9f6/volumes" Nov 24 11:49:26 crc kubenswrapper[4678]: I1124 11:49:26.694943 4678 generic.go:334] "Generic (PLEG): container finished" podID="2f93cb91-ae3f-42ef-844b-70d428271ee1" containerID="5a420d7a85144f8fb50a48c00321a8a50805a187486df0fc44b7f4afe37e2e3a" exitCode=0 Nov 24 11:49:26 crc kubenswrapper[4678]: I1124 11:49:26.695044 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" event={"ID":"2f93cb91-ae3f-42ef-844b-70d428271ee1","Type":"ContainerDied","Data":"5a420d7a85144f8fb50a48c00321a8a50805a187486df0fc44b7f4afe37e2e3a"} Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.246431 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.288622 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-inventory\") pod \"2f93cb91-ae3f-42ef-844b-70d428271ee1\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.288721 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl2l7\" (UniqueName: \"kubernetes.io/projected/2f93cb91-ae3f-42ef-844b-70d428271ee1-kube-api-access-hl2l7\") pod \"2f93cb91-ae3f-42ef-844b-70d428271ee1\" (UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.288857 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-ssh-key\") pod \"2f93cb91-ae3f-42ef-844b-70d428271ee1\" 
(UID: \"2f93cb91-ae3f-42ef-844b-70d428271ee1\") " Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.300056 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f93cb91-ae3f-42ef-844b-70d428271ee1-kube-api-access-hl2l7" (OuterVolumeSpecName: "kube-api-access-hl2l7") pod "2f93cb91-ae3f-42ef-844b-70d428271ee1" (UID: "2f93cb91-ae3f-42ef-844b-70d428271ee1"). InnerVolumeSpecName "kube-api-access-hl2l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.322380 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f93cb91-ae3f-42ef-844b-70d428271ee1" (UID: "2f93cb91-ae3f-42ef-844b-70d428271ee1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.326428 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-inventory" (OuterVolumeSpecName: "inventory") pod "2f93cb91-ae3f-42ef-844b-70d428271ee1" (UID: "2f93cb91-ae3f-42ef-844b-70d428271ee1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.401887 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.401942 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl2l7\" (UniqueName: \"kubernetes.io/projected/2f93cb91-ae3f-42ef-844b-70d428271ee1-kube-api-access-hl2l7\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.401954 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f93cb91-ae3f-42ef-844b-70d428271ee1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.720243 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" event={"ID":"2f93cb91-ae3f-42ef-844b-70d428271ee1","Type":"ContainerDied","Data":"73a161a45597bd728723de4db6c3ed23e116b27177b7220f9a9db7920053338a"} Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.720333 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73a161a45597bd728723de4db6c3ed23e116b27177b7220f9a9db7920053338a" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.720282 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.818910 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j"] Nov 24 11:49:28 crc kubenswrapper[4678]: E1124 11:49:28.819522 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f93cb91-ae3f-42ef-844b-70d428271ee1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.819540 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f93cb91-ae3f-42ef-844b-70d428271ee1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.819917 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f93cb91-ae3f-42ef-844b-70d428271ee1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.820818 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.823228 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.823417 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.823556 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.823711 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.828717 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j"] Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.917444 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 11:49:28.917524 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ckv\" (UniqueName: \"kubernetes.io/projected/2e8e9e91-5959-4640-8cea-d21f383c0c54-kube-api-access-s2ckv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:28 crc kubenswrapper[4678]: I1124 
11:49:28.917654 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:29 crc kubenswrapper[4678]: I1124 11:49:29.020449 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:29 crc kubenswrapper[4678]: I1124 11:49:29.020511 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ckv\" (UniqueName: \"kubernetes.io/projected/2e8e9e91-5959-4640-8cea-d21f383c0c54-kube-api-access-s2ckv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:29 crc kubenswrapper[4678]: I1124 11:49:29.020587 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:29 crc kubenswrapper[4678]: I1124 11:49:29.025706 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:29 crc kubenswrapper[4678]: I1124 11:49:29.026138 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:29 crc kubenswrapper[4678]: I1124 11:49:29.043068 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ckv\" (UniqueName: \"kubernetes.io/projected/2e8e9e91-5959-4640-8cea-d21f383c0c54-kube-api-access-s2ckv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:29 crc kubenswrapper[4678]: I1124 11:49:29.136643 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:29 crc kubenswrapper[4678]: I1124 11:49:29.871626 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j"] Nov 24 11:49:29 crc kubenswrapper[4678]: W1124 11:49:29.873447 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e8e9e91_5959_4640_8cea_d21f383c0c54.slice/crio-f30409e0c4e26c55386226c427c38956423423c18384a5999ed50580c325b33d WatchSource:0}: Error finding container f30409e0c4e26c55386226c427c38956423423c18384a5999ed50580c325b33d: Status 404 returned error can't find the container with id f30409e0c4e26c55386226c427c38956423423c18384a5999ed50580c325b33d Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.297025 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.297086 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.297135 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.298175 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"07f7b4bf38854f595d8be8c0fa05f91ad02239dc235ff30184b0ce433099dc00"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.298249 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://07f7b4bf38854f595d8be8c0fa05f91ad02239dc235ff30184b0ce433099dc00" gracePeriod=600 Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.382223 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.757866 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="07f7b4bf38854f595d8be8c0fa05f91ad02239dc235ff30184b0ce433099dc00" exitCode=0 Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.758185 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"07f7b4bf38854f595d8be8c0fa05f91ad02239dc235ff30184b0ce433099dc00"} Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.758229 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7"} Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.758261 4678 scope.go:117] "RemoveContainer" containerID="d0bbf93b655f2a61c097bd3af34f2d8b30c979413103dbc098abbab250b16363" Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 
11:49:30.762333 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" event={"ID":"2e8e9e91-5959-4640-8cea-d21f383c0c54","Type":"ContainerStarted","Data":"73ca037c239a2cfdece441a4bc038d1b2a16755da60e39946bf38895fe8dcd84"} Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.762367 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" event={"ID":"2e8e9e91-5959-4640-8cea-d21f383c0c54","Type":"ContainerStarted","Data":"f30409e0c4e26c55386226c427c38956423423c18384a5999ed50580c325b33d"} Nov 24 11:49:30 crc kubenswrapper[4678]: I1124 11:49:30.804854 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" podStartSLOduration=2.301402191 podStartE2EDuration="2.804834596s" podCreationTimestamp="2025-11-24 11:49:28 +0000 UTC" firstStartedPulling="2025-11-24 11:49:29.87579697 +0000 UTC m=+1980.806856609" lastFinishedPulling="2025-11-24 11:49:30.379229365 +0000 UTC m=+1981.310289014" observedRunningTime="2025-11-24 11:49:30.801909539 +0000 UTC m=+1981.732969188" watchObservedRunningTime="2025-11-24 11:49:30.804834596 +0000 UTC m=+1981.735894255" Nov 24 11:49:35 crc kubenswrapper[4678]: I1124 11:49:35.829264 4678 generic.go:334] "Generic (PLEG): container finished" podID="2e8e9e91-5959-4640-8cea-d21f383c0c54" containerID="73ca037c239a2cfdece441a4bc038d1b2a16755da60e39946bf38895fe8dcd84" exitCode=0 Nov 24 11:49:35 crc kubenswrapper[4678]: I1124 11:49:35.829381 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" event={"ID":"2e8e9e91-5959-4640-8cea-d21f383c0c54","Type":"ContainerDied","Data":"73ca037c239a2cfdece441a4bc038d1b2a16755da60e39946bf38895fe8dcd84"} Nov 24 11:49:36 crc kubenswrapper[4678]: I1124 11:49:36.439866 4678 scope.go:117] "RemoveContainer" 
containerID="85ac138c0a01934354e4f66fab67b49c448925c029f41568d4c013ca444f1398" Nov 24 11:49:36 crc kubenswrapper[4678]: I1124 11:49:36.479079 4678 scope.go:117] "RemoveContainer" containerID="d94c3a26a2b1f0f0e2cf372040f1bd2e2eeba39a52880e2f3a5f33fc2e9656c9" Nov 24 11:49:36 crc kubenswrapper[4678]: I1124 11:49:36.522584 4678 scope.go:117] "RemoveContainer" containerID="cd9bc4d5ad09d8e09bf66fb754b1f171c6113ad9c7cf61552f1c9c2d3dfa5132" Nov 24 11:49:36 crc kubenswrapper[4678]: I1124 11:49:36.596252 4678 scope.go:117] "RemoveContainer" containerID="c72e46814877bd7631836b7f6611cfdf6281ff5512e74b1c16a5d1f956ac0f00" Nov 24 11:49:36 crc kubenswrapper[4678]: I1124 11:49:36.635049 4678 scope.go:117] "RemoveContainer" containerID="a1f7f0825848dbfc1982da57355cc1324c2ad6611a4b2b9f8a3ef589d72c92ed" Nov 24 11:49:36 crc kubenswrapper[4678]: I1124 11:49:36.689373 4678 scope.go:117] "RemoveContainer" containerID="d16f80ed63ce8416a7c4129769046e29ceeb8fa909d6601c2e275d63a7ae7143" Nov 24 11:49:36 crc kubenswrapper[4678]: I1124 11:49:36.742946 4678 scope.go:117] "RemoveContainer" containerID="889857007c31d2f59ceb7da9ed01ac8cc91dbe1611cb1a00c4f1a4bf347c07bc" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.227399 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.360084 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2ckv\" (UniqueName: \"kubernetes.io/projected/2e8e9e91-5959-4640-8cea-d21f383c0c54-kube-api-access-s2ckv\") pod \"2e8e9e91-5959-4640-8cea-d21f383c0c54\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.360213 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-inventory\") pod \"2e8e9e91-5959-4640-8cea-d21f383c0c54\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.360302 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-ssh-key\") pod \"2e8e9e91-5959-4640-8cea-d21f383c0c54\" (UID: \"2e8e9e91-5959-4640-8cea-d21f383c0c54\") " Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.367007 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8e9e91-5959-4640-8cea-d21f383c0c54-kube-api-access-s2ckv" (OuterVolumeSpecName: "kube-api-access-s2ckv") pod "2e8e9e91-5959-4640-8cea-d21f383c0c54" (UID: "2e8e9e91-5959-4640-8cea-d21f383c0c54"). InnerVolumeSpecName "kube-api-access-s2ckv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.392948 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-inventory" (OuterVolumeSpecName: "inventory") pod "2e8e9e91-5959-4640-8cea-d21f383c0c54" (UID: "2e8e9e91-5959-4640-8cea-d21f383c0c54"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.394951 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e8e9e91-5959-4640-8cea-d21f383c0c54" (UID: "2e8e9e91-5959-4640-8cea-d21f383c0c54"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.463856 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2ckv\" (UniqueName: \"kubernetes.io/projected/2e8e9e91-5959-4640-8cea-d21f383c0c54-kube-api-access-s2ckv\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.464137 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.464204 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e8e9e91-5959-4640-8cea-d21f383c0c54-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.887963 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" event={"ID":"2e8e9e91-5959-4640-8cea-d21f383c0c54","Type":"ContainerDied","Data":"f30409e0c4e26c55386226c427c38956423423c18384a5999ed50580c325b33d"} Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.888319 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f30409e0c4e26c55386226c427c38956423423c18384a5999ed50580c325b33d" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.888035 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.949706 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq"] Nov 24 11:49:37 crc kubenswrapper[4678]: E1124 11:49:37.950401 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e8e9e91-5959-4640-8cea-d21f383c0c54" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.950428 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8e9e91-5959-4640-8cea-d21f383c0c54" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.950767 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e8e9e91-5959-4640-8cea-d21f383c0c54" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.951958 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.954466 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.954684 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.954740 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.954866 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:49:37 crc kubenswrapper[4678]: I1124 11:49:37.964639 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq"] Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.085118 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.085197 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6ng\" (UniqueName: \"kubernetes.io/projected/8188cfcf-b26c-4761-886d-786112eb4539-kube-api-access-tc6ng\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.085286 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.187950 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.188080 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc6ng\" (UniqueName: \"kubernetes.io/projected/8188cfcf-b26c-4761-886d-786112eb4539-kube-api-access-tc6ng\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.188246 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.192609 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: 
\"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.192797 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.205774 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc6ng\" (UniqueName: \"kubernetes.io/projected/8188cfcf-b26c-4761-886d-786112eb4539-kube-api-access-tc6ng\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qrmsq\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.283602 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:49:38 crc kubenswrapper[4678]: I1124 11:49:38.960313 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq"] Nov 24 11:49:39 crc kubenswrapper[4678]: I1124 11:49:39.914107 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" event={"ID":"8188cfcf-b26c-4761-886d-786112eb4539","Type":"ContainerStarted","Data":"e747418500713c6fbd0d1d70519f161c8a3569fb85b85d4ebc6eca04a44a0a0f"} Nov 24 11:49:39 crc kubenswrapper[4678]: I1124 11:49:39.914401 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" event={"ID":"8188cfcf-b26c-4761-886d-786112eb4539","Type":"ContainerStarted","Data":"d5dc4f9dcef68e607ee83a106e49da3178e94afc95928ca3643ebcde5bb5fee5"} Nov 24 11:49:39 crc kubenswrapper[4678]: I1124 11:49:39.964955 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" podStartSLOduration=2.356898411 podStartE2EDuration="2.964932341s" podCreationTimestamp="2025-11-24 11:49:37 +0000 UTC" firstStartedPulling="2025-11-24 11:49:38.956602781 +0000 UTC m=+1989.887662420" lastFinishedPulling="2025-11-24 11:49:39.564636721 +0000 UTC m=+1990.495696350" observedRunningTime="2025-11-24 11:49:39.94871269 +0000 UTC m=+1990.879772339" watchObservedRunningTime="2025-11-24 11:49:39.964932341 +0000 UTC m=+1990.895991990" Nov 24 11:49:40 crc kubenswrapper[4678]: I1124 11:49:40.050607 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-frnfg"] Nov 24 11:49:40 crc kubenswrapper[4678]: I1124 11:49:40.059155 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-frnfg"] Nov 24 11:49:41 crc kubenswrapper[4678]: I1124 
11:49:41.922908 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c07a289-92fa-4945-a0d6-fa2524b0492f" path="/var/lib/kubelet/pods/7c07a289-92fa-4945-a0d6-fa2524b0492f/volumes" Nov 24 11:49:42 crc kubenswrapper[4678]: I1124 11:49:42.031774 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-4187-account-create-w4jn6"] Nov 24 11:49:42 crc kubenswrapper[4678]: I1124 11:49:42.041920 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-4187-account-create-w4jn6"] Nov 24 11:49:42 crc kubenswrapper[4678]: I1124 11:49:42.050959 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-cw5kl"] Nov 24 11:49:42 crc kubenswrapper[4678]: I1124 11:49:42.059966 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-cw5kl"] Nov 24 11:49:43 crc kubenswrapper[4678]: I1124 11:49:43.939562 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0d74d3-3a32-4293-a8a7-53b6f541cbdd" path="/var/lib/kubelet/pods/6e0d74d3-3a32-4293-a8a7-53b6f541cbdd/volumes" Nov 24 11:49:43 crc kubenswrapper[4678]: I1124 11:49:43.944462 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78946c61-158b-4f91-8717-cffd82196ea0" path="/var/lib/kubelet/pods/78946c61-158b-4f91-8717-cffd82196ea0/volumes" Nov 24 11:49:45 crc kubenswrapper[4678]: I1124 11:49:45.035149 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mptk6"] Nov 24 11:49:45 crc kubenswrapper[4678]: I1124 11:49:45.045935 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mptk6"] Nov 24 11:49:45 crc kubenswrapper[4678]: I1124 11:49:45.910567 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a390c1f-e5b4-47a0-a9e8-a9979475fbab" path="/var/lib/kubelet/pods/0a390c1f-e5b4-47a0-a9e8-a9979475fbab/volumes" Nov 24 11:50:16 crc kubenswrapper[4678]: I1124 11:50:16.349250 
4678 generic.go:334] "Generic (PLEG): container finished" podID="8188cfcf-b26c-4761-886d-786112eb4539" containerID="e747418500713c6fbd0d1d70519f161c8a3569fb85b85d4ebc6eca04a44a0a0f" exitCode=0 Nov 24 11:50:16 crc kubenswrapper[4678]: I1124 11:50:16.349301 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" event={"ID":"8188cfcf-b26c-4761-886d-786112eb4539","Type":"ContainerDied","Data":"e747418500713c6fbd0d1d70519f161c8a3569fb85b85d4ebc6eca04a44a0a0f"} Nov 24 11:50:17 crc kubenswrapper[4678]: I1124 11:50:17.927715 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.080891 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-inventory\") pod \"8188cfcf-b26c-4761-886d-786112eb4539\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.081032 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-ssh-key\") pod \"8188cfcf-b26c-4761-886d-786112eb4539\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.081107 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc6ng\" (UniqueName: \"kubernetes.io/projected/8188cfcf-b26c-4761-886d-786112eb4539-kube-api-access-tc6ng\") pod \"8188cfcf-b26c-4761-886d-786112eb4539\" (UID: \"8188cfcf-b26c-4761-886d-786112eb4539\") " Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.086663 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8188cfcf-b26c-4761-886d-786112eb4539-kube-api-access-tc6ng" (OuterVolumeSpecName: "kube-api-access-tc6ng") pod "8188cfcf-b26c-4761-886d-786112eb4539" (UID: "8188cfcf-b26c-4761-886d-786112eb4539"). InnerVolumeSpecName "kube-api-access-tc6ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.114296 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8188cfcf-b26c-4761-886d-786112eb4539" (UID: "8188cfcf-b26c-4761-886d-786112eb4539"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.121825 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-inventory" (OuterVolumeSpecName: "inventory") pod "8188cfcf-b26c-4761-886d-786112eb4539" (UID: "8188cfcf-b26c-4761-886d-786112eb4539"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.184608 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.184758 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8188cfcf-b26c-4761-886d-786112eb4539-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.184849 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc6ng\" (UniqueName: \"kubernetes.io/projected/8188cfcf-b26c-4761-886d-786112eb4539-kube-api-access-tc6ng\") on node \"crc\" DevicePath \"\"" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.380296 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" event={"ID":"8188cfcf-b26c-4761-886d-786112eb4539","Type":"ContainerDied","Data":"d5dc4f9dcef68e607ee83a106e49da3178e94afc95928ca3643ebcde5bb5fee5"} Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.380344 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5dc4f9dcef68e607ee83a106e49da3178e94afc95928ca3643ebcde5bb5fee5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.380425 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qrmsq" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.489611 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5"] Nov 24 11:50:18 crc kubenswrapper[4678]: E1124 11:50:18.490300 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8188cfcf-b26c-4761-886d-786112eb4539" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.490320 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8188cfcf-b26c-4761-886d-786112eb4539" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.490570 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="8188cfcf-b26c-4761-886d-786112eb4539" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.491604 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.497061 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.497183 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.497511 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.497786 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.500860 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5"] Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.594636 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.594710 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df594\" (UniqueName: \"kubernetes.io/projected/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-kube-api-access-df594\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.594756 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.697053 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.697574 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df594\" (UniqueName: \"kubernetes.io/projected/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-kube-api-access-df594\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.697614 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.704332 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: 
\"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.704804 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.716124 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df594\" (UniqueName: \"kubernetes.io/projected/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-kube-api-access-df594\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:18 crc kubenswrapper[4678]: I1124 11:50:18.809044 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:50:19 crc kubenswrapper[4678]: I1124 11:50:19.457632 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5"] Nov 24 11:50:20 crc kubenswrapper[4678]: I1124 11:50:20.407310 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" event={"ID":"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a","Type":"ContainerStarted","Data":"3d415dfc9bb0eefb4071909d6f5c3686097abeb21ad2a0098cf4a94d39392f69"} Nov 24 11:50:20 crc kubenswrapper[4678]: I1124 11:50:20.407627 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" event={"ID":"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a","Type":"ContainerStarted","Data":"531ae697f51530ea7d298ecc5b250b885ecbebbb2d6880c5830b563996bdd49d"} Nov 24 11:50:20 crc kubenswrapper[4678]: I1124 11:50:20.428603 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" podStartSLOduration=1.803372336 podStartE2EDuration="2.428585933s" podCreationTimestamp="2025-11-24 11:50:18 +0000 UTC" firstStartedPulling="2025-11-24 11:50:19.445335898 +0000 UTC m=+2030.376395557" lastFinishedPulling="2025-11-24 11:50:20.070549515 +0000 UTC m=+2031.001609154" observedRunningTime="2025-11-24 11:50:20.423122458 +0000 UTC m=+2031.354182117" watchObservedRunningTime="2025-11-24 11:50:20.428585933 +0000 UTC m=+2031.359645572" Nov 24 11:50:25 crc kubenswrapper[4678]: I1124 11:50:25.042226 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8sk"] Nov 24 11:50:25 crc kubenswrapper[4678]: I1124 11:50:25.052725 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8sk"] Nov 24 11:50:25 crc kubenswrapper[4678]: I1124 
11:50:25.917634 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4c80df-952c-4b91-9957-5629417ef13a" path="/var/lib/kubelet/pods/9d4c80df-952c-4b91-9957-5629417ef13a/volumes" Nov 24 11:50:36 crc kubenswrapper[4678]: I1124 11:50:36.945998 4678 scope.go:117] "RemoveContainer" containerID="9a9c24edc7320c63a99e052fc4a677b5f4235aa9df14f2e71abc1cd7c87f36b8" Nov 24 11:50:36 crc kubenswrapper[4678]: I1124 11:50:36.996448 4678 scope.go:117] "RemoveContainer" containerID="24cdb485621eb9e22ea5b3a8bbef6fd71b4f914a3550122c642af1706945bcac" Nov 24 11:50:37 crc kubenswrapper[4678]: I1124 11:50:37.033733 4678 scope.go:117] "RemoveContainer" containerID="b71e95ad1773190481a5e9b395e2aa833103646e5c626528943357a7946244db" Nov 24 11:50:37 crc kubenswrapper[4678]: I1124 11:50:37.100204 4678 scope.go:117] "RemoveContainer" containerID="96c297a54196c1f5d7d6fa0ed71e695c0f34d0a588925a2d19693463f1854bb3" Nov 24 11:50:37 crc kubenswrapper[4678]: I1124 11:50:37.140927 4678 scope.go:117] "RemoveContainer" containerID="b841e9b8aba88f1d070e9f7584f8b758b33955b65305660616f6b2d6ccffc3f1" Nov 24 11:50:53 crc kubenswrapper[4678]: I1124 11:50:53.889112 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gl86d"] Nov 24 11:50:53 crc kubenswrapper[4678]: I1124 11:50:53.893169 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:53 crc kubenswrapper[4678]: I1124 11:50:53.909287 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl86d"] Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.043960 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7psz8\" (UniqueName: \"kubernetes.io/projected/e8671688-d21a-471d-a7ef-aa87d927f001-kube-api-access-7psz8\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.044081 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-catalog-content\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.044152 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-utilities\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.085069 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-458z9"] Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.087524 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.105973 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-458z9"] Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.145978 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-utilities\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.146130 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7psz8\" (UniqueName: \"kubernetes.io/projected/e8671688-d21a-471d-a7ef-aa87d927f001-kube-api-access-7psz8\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.146219 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-catalog-content\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.146493 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-utilities\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.146632 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-catalog-content\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.164948 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7psz8\" (UniqueName: \"kubernetes.io/projected/e8671688-d21a-471d-a7ef-aa87d927f001-kube-api-access-7psz8\") pod \"redhat-operators-gl86d\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.248053 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl6rq\" (UniqueName: \"kubernetes.io/projected/2727dd57-33a9-4273-81bf-c7fcf6695455-kube-api-access-pl6rq\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.248797 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-catalog-content\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.248831 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-utilities\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.259123 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.351868 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl6rq\" (UniqueName: \"kubernetes.io/projected/2727dd57-33a9-4273-81bf-c7fcf6695455-kube-api-access-pl6rq\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.351961 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-catalog-content\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.351997 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-utilities\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.352734 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-catalog-content\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.352789 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-utilities\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " 
pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.375771 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl6rq\" (UniqueName: \"kubernetes.io/projected/2727dd57-33a9-4273-81bf-c7fcf6695455-kube-api-access-pl6rq\") pod \"redhat-marketplace-458z9\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.409228 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.810165 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl86d"] Nov 24 11:50:54 crc kubenswrapper[4678]: I1124 11:50:54.852784 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl86d" event={"ID":"e8671688-d21a-471d-a7ef-aa87d927f001","Type":"ContainerStarted","Data":"d656272cc109a6db0414d2780b84da5cdae25ecc78dfb11b14ea1c18be57bcca"} Nov 24 11:50:55 crc kubenswrapper[4678]: I1124 11:50:55.112551 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-458z9"] Nov 24 11:50:55 crc kubenswrapper[4678]: W1124 11:50:55.180552 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2727dd57_33a9_4273_81bf_c7fcf6695455.slice/crio-2f6b9b0e94c7b1f3634244aa42293facbbada92ce0bcb99995c25c47a47ff964 WatchSource:0}: Error finding container 2f6b9b0e94c7b1f3634244aa42293facbbada92ce0bcb99995c25c47a47ff964: Status 404 returned error can't find the container with id 2f6b9b0e94c7b1f3634244aa42293facbbada92ce0bcb99995c25c47a47ff964 Nov 24 11:50:55 crc kubenswrapper[4678]: I1124 11:50:55.868813 4678 generic.go:334] "Generic (PLEG): container finished" 
podID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerID="b74de7b9b0af04c1ef2148a36222a26d0fa0f1eb9d0869e3f6fd34705b188aad" exitCode=0 Nov 24 11:50:55 crc kubenswrapper[4678]: I1124 11:50:55.868873 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-458z9" event={"ID":"2727dd57-33a9-4273-81bf-c7fcf6695455","Type":"ContainerDied","Data":"b74de7b9b0af04c1ef2148a36222a26d0fa0f1eb9d0869e3f6fd34705b188aad"} Nov 24 11:50:55 crc kubenswrapper[4678]: I1124 11:50:55.869753 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-458z9" event={"ID":"2727dd57-33a9-4273-81bf-c7fcf6695455","Type":"ContainerStarted","Data":"2f6b9b0e94c7b1f3634244aa42293facbbada92ce0bcb99995c25c47a47ff964"} Nov 24 11:50:55 crc kubenswrapper[4678]: I1124 11:50:55.875271 4678 generic.go:334] "Generic (PLEG): container finished" podID="e8671688-d21a-471d-a7ef-aa87d927f001" containerID="111d277ea07d7e7e7af168eefae147fa643e986b444a55e5643a705908bc6870" exitCode=0 Nov 24 11:50:55 crc kubenswrapper[4678]: I1124 11:50:55.875317 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl86d" event={"ID":"e8671688-d21a-471d-a7ef-aa87d927f001","Type":"ContainerDied","Data":"111d277ea07d7e7e7af168eefae147fa643e986b444a55e5643a705908bc6870"} Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.288851 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tm4qh"] Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.291654 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.302320 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zptz8\" (UniqueName: \"kubernetes.io/projected/33e64aad-d54d-471b-9e1a-622e63ea3598-kube-api-access-zptz8\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.302368 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-utilities\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.303135 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-catalog-content\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.308630 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tm4qh"] Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.406137 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-catalog-content\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.406291 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zptz8\" (UniqueName: \"kubernetes.io/projected/33e64aad-d54d-471b-9e1a-622e63ea3598-kube-api-access-zptz8\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.406328 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-utilities\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.406692 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-catalog-content\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.407031 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-utilities\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.432423 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zptz8\" (UniqueName: \"kubernetes.io/projected/33e64aad-d54d-471b-9e1a-622e63ea3598-kube-api-access-zptz8\") pod \"community-operators-tm4qh\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.651828 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:50:56 crc kubenswrapper[4678]: I1124 11:50:56.913864 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-458z9" event={"ID":"2727dd57-33a9-4273-81bf-c7fcf6695455","Type":"ContainerStarted","Data":"fb587294daadd7e86d717aa16471cd30f32c056be2e6ad5c4a39a6bc2db40214"} Nov 24 11:50:57 crc kubenswrapper[4678]: I1124 11:50:57.253972 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tm4qh"] Nov 24 11:50:57 crc kubenswrapper[4678]: I1124 11:50:57.926645 4678 generic.go:334] "Generic (PLEG): container finished" podID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerID="fb587294daadd7e86d717aa16471cd30f32c056be2e6ad5c4a39a6bc2db40214" exitCode=0 Nov 24 11:50:57 crc kubenswrapper[4678]: I1124 11:50:57.926708 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-458z9" event={"ID":"2727dd57-33a9-4273-81bf-c7fcf6695455","Type":"ContainerDied","Data":"fb587294daadd7e86d717aa16471cd30f32c056be2e6ad5c4a39a6bc2db40214"} Nov 24 11:50:57 crc kubenswrapper[4678]: I1124 11:50:57.931427 4678 generic.go:334] "Generic (PLEG): container finished" podID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerID="baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d" exitCode=0 Nov 24 11:50:57 crc kubenswrapper[4678]: I1124 11:50:57.931476 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm4qh" event={"ID":"33e64aad-d54d-471b-9e1a-622e63ea3598","Type":"ContainerDied","Data":"baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d"} Nov 24 11:50:57 crc kubenswrapper[4678]: I1124 11:50:57.931512 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm4qh" 
event={"ID":"33e64aad-d54d-471b-9e1a-622e63ea3598","Type":"ContainerStarted","Data":"76e0ad4f2e083a3fd9d2070fabe24c8d11638baaa144b1e449fcd64c07cc3d58"} Nov 24 11:50:58 crc kubenswrapper[4678]: I1124 11:50:58.957086 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm4qh" event={"ID":"33e64aad-d54d-471b-9e1a-622e63ea3598","Type":"ContainerStarted","Data":"a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3"} Nov 24 11:50:58 crc kubenswrapper[4678]: I1124 11:50:58.960927 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-458z9" event={"ID":"2727dd57-33a9-4273-81bf-c7fcf6695455","Type":"ContainerStarted","Data":"2f8699ea2cbcb42b7a710b6955d597bffcb3b94d45009049b5288588c05b46eb"} Nov 24 11:50:59 crc kubenswrapper[4678]: I1124 11:50:59.020797 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-458z9" podStartSLOduration=2.583375428 podStartE2EDuration="5.020770399s" podCreationTimestamp="2025-11-24 11:50:54 +0000 UTC" firstStartedPulling="2025-11-24 11:50:55.872651513 +0000 UTC m=+2066.803711152" lastFinishedPulling="2025-11-24 11:50:58.310046494 +0000 UTC m=+2069.241106123" observedRunningTime="2025-11-24 11:50:59.014177454 +0000 UTC m=+2069.945237103" watchObservedRunningTime="2025-11-24 11:50:59.020770399 +0000 UTC m=+2069.951830038" Nov 24 11:51:02 crc kubenswrapper[4678]: I1124 11:51:02.006127 4678 generic.go:334] "Generic (PLEG): container finished" podID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerID="a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3" exitCode=0 Nov 24 11:51:02 crc kubenswrapper[4678]: I1124 11:51:02.006216 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm4qh" 
event={"ID":"33e64aad-d54d-471b-9e1a-622e63ea3598","Type":"ContainerDied","Data":"a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3"} Nov 24 11:51:04 crc kubenswrapper[4678]: I1124 11:51:04.411127 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:51:04 crc kubenswrapper[4678]: I1124 11:51:04.411777 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:51:04 crc kubenswrapper[4678]: I1124 11:51:04.467019 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.092303 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xrj5c"] Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.095316 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.106986 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xrj5c"] Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.143008 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.256291 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-utilities\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.256452 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-7jl9z\" (UniqueName: \"kubernetes.io/projected/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-kube-api-access-7jl9z\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.256626 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-catalog-content\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.358806 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jl9z\" (UniqueName: \"kubernetes.io/projected/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-kube-api-access-7jl9z\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.358978 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-catalog-content\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.359013 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-utilities\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.359606 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-catalog-content\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.359738 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-utilities\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.378551 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jl9z\" (UniqueName: \"kubernetes.io/projected/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-kube-api-access-7jl9z\") pod \"certified-operators-xrj5c\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:05 crc kubenswrapper[4678]: I1124 11:51:05.429102 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:06 crc kubenswrapper[4678]: I1124 11:51:06.878284 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-458z9"] Nov 24 11:51:07 crc kubenswrapper[4678]: I1124 11:51:07.101781 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-458z9" podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerName="registry-server" containerID="cri-o://2f8699ea2cbcb42b7a710b6955d597bffcb3b94d45009049b5288588c05b46eb" gracePeriod=2 Nov 24 11:51:08 crc kubenswrapper[4678]: I1124 11:51:08.113653 4678 generic.go:334] "Generic (PLEG): container finished" podID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerID="2f8699ea2cbcb42b7a710b6955d597bffcb3b94d45009049b5288588c05b46eb" exitCode=0 Nov 24 11:51:08 crc kubenswrapper[4678]: I1124 11:51:08.113731 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-458z9" event={"ID":"2727dd57-33a9-4273-81bf-c7fcf6695455","Type":"ContainerDied","Data":"2f8699ea2cbcb42b7a710b6955d597bffcb3b94d45009049b5288588c05b46eb"} Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.153890 4678 generic.go:334] "Generic (PLEG): container finished" podID="a55cd4bf-43a3-4ba5-a44e-6531b7e6740a" containerID="3d415dfc9bb0eefb4071909d6f5c3686097abeb21ad2a0098cf4a94d39392f69" exitCode=0 Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.155378 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" event={"ID":"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a","Type":"ContainerDied","Data":"3d415dfc9bb0eefb4071909d6f5c3686097abeb21ad2a0098cf4a94d39392f69"} Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.249470 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.360415 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-utilities\") pod \"2727dd57-33a9-4273-81bf-c7fcf6695455\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.360522 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl6rq\" (UniqueName: \"kubernetes.io/projected/2727dd57-33a9-4273-81bf-c7fcf6695455-kube-api-access-pl6rq\") pod \"2727dd57-33a9-4273-81bf-c7fcf6695455\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.360639 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-catalog-content\") pod \"2727dd57-33a9-4273-81bf-c7fcf6695455\" (UID: \"2727dd57-33a9-4273-81bf-c7fcf6695455\") " Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.362094 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-utilities" (OuterVolumeSpecName: "utilities") pod "2727dd57-33a9-4273-81bf-c7fcf6695455" (UID: "2727dd57-33a9-4273-81bf-c7fcf6695455"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.367942 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2727dd57-33a9-4273-81bf-c7fcf6695455-kube-api-access-pl6rq" (OuterVolumeSpecName: "kube-api-access-pl6rq") pod "2727dd57-33a9-4273-81bf-c7fcf6695455" (UID: "2727dd57-33a9-4273-81bf-c7fcf6695455"). InnerVolumeSpecName "kube-api-access-pl6rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.371957 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.371985 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl6rq\" (UniqueName: \"kubernetes.io/projected/2727dd57-33a9-4273-81bf-c7fcf6695455-kube-api-access-pl6rq\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.377001 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2727dd57-33a9-4273-81bf-c7fcf6695455" (UID: "2727dd57-33a9-4273-81bf-c7fcf6695455"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:51:09 crc kubenswrapper[4678]: W1124 11:51:09.409359 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90dd0a42_3c18_4e1a_a1d3_8c2ae1770442.slice/crio-46f8f791697f294e3295d5096b45b2aed5e392408a5731747649b05d19855d5a WatchSource:0}: Error finding container 46f8f791697f294e3295d5096b45b2aed5e392408a5731747649b05d19855d5a: Status 404 returned error can't find the container with id 46f8f791697f294e3295d5096b45b2aed5e392408a5731747649b05d19855d5a Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.414816 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xrj5c"] Nov 24 11:51:09 crc kubenswrapper[4678]: I1124 11:51:09.486296 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2727dd57-33a9-4273-81bf-c7fcf6695455-catalog-content\") on node 
\"crc\" DevicePath \"\"" Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.167920 4678 generic.go:334] "Generic (PLEG): container finished" podID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerID="df2ff05da0206859fe7f3222a27e5535213a096ae79844bcc25694c23682cd1d" exitCode=0 Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.167968 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrj5c" event={"ID":"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442","Type":"ContainerDied","Data":"df2ff05da0206859fe7f3222a27e5535213a096ae79844bcc25694c23682cd1d"} Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.168240 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrj5c" event={"ID":"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442","Type":"ContainerStarted","Data":"46f8f791697f294e3295d5096b45b2aed5e392408a5731747649b05d19855d5a"} Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.172607 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm4qh" event={"ID":"33e64aad-d54d-471b-9e1a-622e63ea3598","Type":"ContainerStarted","Data":"ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e"} Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.175504 4678 generic.go:334] "Generic (PLEG): container finished" podID="e8671688-d21a-471d-a7ef-aa87d927f001" containerID="b31e628eee9d6bfd9180a0dd55b1faa6ee8931fc95d3cfb45820d5fd8623d0fa" exitCode=0 Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.175590 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl86d" event={"ID":"e8671688-d21a-471d-a7ef-aa87d927f001","Type":"ContainerDied","Data":"b31e628eee9d6bfd9180a0dd55b1faa6ee8931fc95d3cfb45820d5fd8623d0fa"} Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.179439 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-458z9" Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.183825 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-458z9" event={"ID":"2727dd57-33a9-4273-81bf-c7fcf6695455","Type":"ContainerDied","Data":"2f6b9b0e94c7b1f3634244aa42293facbbada92ce0bcb99995c25c47a47ff964"} Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.183893 4678 scope.go:117] "RemoveContainer" containerID="2f8699ea2cbcb42b7a710b6955d597bffcb3b94d45009049b5288588c05b46eb" Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.267967 4678 scope.go:117] "RemoveContainer" containerID="fb587294daadd7e86d717aa16471cd30f32c056be2e6ad5c4a39a6bc2db40214" Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.279847 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tm4qh" podStartSLOduration=3.030819321 podStartE2EDuration="14.279813665s" podCreationTimestamp="2025-11-24 11:50:56 +0000 UTC" firstStartedPulling="2025-11-24 11:50:57.934883231 +0000 UTC m=+2068.865942880" lastFinishedPulling="2025-11-24 11:51:09.183877585 +0000 UTC m=+2080.114937224" observedRunningTime="2025-11-24 11:51:10.224246586 +0000 UTC m=+2081.155306225" watchObservedRunningTime="2025-11-24 11:51:10.279813665 +0000 UTC m=+2081.210873304" Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.301751 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-458z9"] Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.311634 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-458z9"] Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.342774 4678 scope.go:117] "RemoveContainer" containerID="b74de7b9b0af04c1ef2148a36222a26d0fa0f1eb9d0869e3f6fd34705b188aad" Nov 24 11:51:10 crc kubenswrapper[4678]: I1124 11:51:10.887059 4678 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.036318 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-inventory\") pod \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.036358 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df594\" (UniqueName: \"kubernetes.io/projected/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-kube-api-access-df594\") pod \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.036577 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-ssh-key\") pod \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\" (UID: \"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a\") " Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.042401 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-kube-api-access-df594" (OuterVolumeSpecName: "kube-api-access-df594") pod "a55cd4bf-43a3-4ba5-a44e-6531b7e6740a" (UID: "a55cd4bf-43a3-4ba5-a44e-6531b7e6740a"). InnerVolumeSpecName "kube-api-access-df594". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.071801 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a55cd4bf-43a3-4ba5-a44e-6531b7e6740a" (UID: "a55cd4bf-43a3-4ba5-a44e-6531b7e6740a"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.073075 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-inventory" (OuterVolumeSpecName: "inventory") pod "a55cd4bf-43a3-4ba5-a44e-6531b7e6740a" (UID: "a55cd4bf-43a3-4ba5-a44e-6531b7e6740a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.139883 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.139934 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.139948 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df594\" (UniqueName: \"kubernetes.io/projected/a55cd4bf-43a3-4ba5-a44e-6531b7e6740a-kube-api-access-df594\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.196770 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl86d" event={"ID":"e8671688-d21a-471d-a7ef-aa87d927f001","Type":"ContainerStarted","Data":"ba18c21817ba37cd975492b8047acaccfa6f9684ef94839f6f6c5c0b05845a72"} Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.198984 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.199128 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5" event={"ID":"a55cd4bf-43a3-4ba5-a44e-6531b7e6740a","Type":"ContainerDied","Data":"531ae697f51530ea7d298ecc5b250b885ecbebbb2d6880c5830b563996bdd49d"} Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.199272 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="531ae697f51530ea7d298ecc5b250b885ecbebbb2d6880c5830b563996bdd49d" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.241501 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gl86d" podStartSLOduration=3.453832607 podStartE2EDuration="18.241478639s" podCreationTimestamp="2025-11-24 11:50:53 +0000 UTC" firstStartedPulling="2025-11-24 11:50:55.876909706 +0000 UTC m=+2066.807969345" lastFinishedPulling="2025-11-24 11:51:10.664555738 +0000 UTC m=+2081.595615377" observedRunningTime="2025-11-24 11:51:11.219727696 +0000 UTC m=+2082.150787335" watchObservedRunningTime="2025-11-24 11:51:11.241478639 +0000 UTC m=+2082.172538278" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.309730 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mwg84"] Nov 24 11:51:11 crc kubenswrapper[4678]: E1124 11:51:11.310534 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerName="extract-utilities" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.310574 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerName="extract-utilities" Nov 24 11:51:11 crc kubenswrapper[4678]: E1124 11:51:11.310599 4678 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerName="registry-server" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.310608 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerName="registry-server" Nov 24 11:51:11 crc kubenswrapper[4678]: E1124 11:51:11.310649 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerName="extract-content" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.310659 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerName="extract-content" Nov 24 11:51:11 crc kubenswrapper[4678]: E1124 11:51:11.310739 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55cd4bf-43a3-4ba5-a44e-6531b7e6740a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.310750 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55cd4bf-43a3-4ba5-a44e-6531b7e6740a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.311168 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" containerName="registry-server" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.311191 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55cd4bf-43a3-4ba5-a44e-6531b7e6740a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.312656 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.319412 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mwg84"] Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.319750 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.319902 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.320041 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.320291 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.446215 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.446308 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.446358 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p7bw2\" (UniqueName: \"kubernetes.io/projected/dd152515-28eb-453c-a841-34dc603a3c3d-kube-api-access-p7bw2\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.548491 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bw2\" (UniqueName: \"kubernetes.io/projected/dd152515-28eb-453c-a841-34dc603a3c3d-kube-api-access-p7bw2\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.548738 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.549865 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.552890 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 
11:51:11.553535 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.563101 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bw2\" (UniqueName: \"kubernetes.io/projected/dd152515-28eb-453c-a841-34dc603a3c3d-kube-api-access-p7bw2\") pod \"ssh-known-hosts-edpm-deployment-mwg84\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.697035 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:11 crc kubenswrapper[4678]: I1124 11:51:11.924476 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2727dd57-33a9-4273-81bf-c7fcf6695455" path="/var/lib/kubelet/pods/2727dd57-33a9-4273-81bf-c7fcf6695455/volumes" Nov 24 11:51:12 crc kubenswrapper[4678]: I1124 11:51:12.216132 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrj5c" event={"ID":"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442","Type":"ContainerStarted","Data":"531efd2ce6945e86c8a7cd0b712eb126b8a53a39f0a2bc1ffa384d44a9ae2136"} Nov 24 11:51:12 crc kubenswrapper[4678]: I1124 11:51:12.311933 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mwg84"] Nov 24 11:51:12 crc kubenswrapper[4678]: W1124 11:51:12.317683 4678 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd152515_28eb_453c_a841_34dc603a3c3d.slice/crio-a17bf5173e56db3b18108dc66ca00415284714508a06cfb7fef8ee24c71138b4 WatchSource:0}: Error finding container a17bf5173e56db3b18108dc66ca00415284714508a06cfb7fef8ee24c71138b4: Status 404 returned error can't find the container with id a17bf5173e56db3b18108dc66ca00415284714508a06cfb7fef8ee24c71138b4 Nov 24 11:51:13 crc kubenswrapper[4678]: I1124 11:51:13.232201 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" event={"ID":"dd152515-28eb-453c-a841-34dc603a3c3d","Type":"ContainerStarted","Data":"90abae6dede68a87ee2b2dac80b0cb5b9a2f1cb8817cafe7a38d34bbb232b42f"} Nov 24 11:51:13 crc kubenswrapper[4678]: I1124 11:51:13.232987 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" event={"ID":"dd152515-28eb-453c-a841-34dc603a3c3d","Type":"ContainerStarted","Data":"a17bf5173e56db3b18108dc66ca00415284714508a06cfb7fef8ee24c71138b4"} Nov 24 11:51:13 crc kubenswrapper[4678]: I1124 11:51:13.253815 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" podStartSLOduration=1.8161477000000001 podStartE2EDuration="2.253798019s" podCreationTimestamp="2025-11-24 11:51:11 +0000 UTC" firstStartedPulling="2025-11-24 11:51:12.321088111 +0000 UTC m=+2083.252147750" lastFinishedPulling="2025-11-24 11:51:12.75873843 +0000 UTC m=+2083.689798069" observedRunningTime="2025-11-24 11:51:13.24638801 +0000 UTC m=+2084.177447649" watchObservedRunningTime="2025-11-24 11:51:13.253798019 +0000 UTC m=+2084.184857658" Nov 24 11:51:14 crc kubenswrapper[4678]: I1124 11:51:14.260034 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:51:14 crc kubenswrapper[4678]: I1124 11:51:14.260283 4678 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:51:15 crc kubenswrapper[4678]: I1124 11:51:15.257484 4678 generic.go:334] "Generic (PLEG): container finished" podID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerID="531efd2ce6945e86c8a7cd0b712eb126b8a53a39f0a2bc1ffa384d44a9ae2136" exitCode=0 Nov 24 11:51:15 crc kubenswrapper[4678]: I1124 11:51:15.258120 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrj5c" event={"ID":"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442","Type":"ContainerDied","Data":"531efd2ce6945e86c8a7cd0b712eb126b8a53a39f0a2bc1ffa384d44a9ae2136"} Nov 24 11:51:15 crc kubenswrapper[4678]: I1124 11:51:15.316748 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl86d" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="registry-server" probeResult="failure" output=< Nov 24 11:51:15 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:51:15 crc kubenswrapper[4678]: > Nov 24 11:51:16 crc kubenswrapper[4678]: I1124 11:51:16.272958 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrj5c" event={"ID":"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442","Type":"ContainerStarted","Data":"9e2524cbe62f0f22f0e10c5317fb0ef4e9a3952abb9a87f82d7520dab9458381"} Nov 24 11:51:16 crc kubenswrapper[4678]: I1124 11:51:16.289740 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xrj5c" podStartSLOduration=5.778081828 podStartE2EDuration="11.289722141s" podCreationTimestamp="2025-11-24 11:51:05 +0000 UTC" firstStartedPulling="2025-11-24 11:51:10.170724883 +0000 UTC m=+2081.101784522" lastFinishedPulling="2025-11-24 11:51:15.682365196 +0000 UTC m=+2086.613424835" observedRunningTime="2025-11-24 11:51:16.289586677 +0000 UTC m=+2087.220646316" watchObservedRunningTime="2025-11-24 
11:51:16.289722141 +0000 UTC m=+2087.220781780" Nov 24 11:51:16 crc kubenswrapper[4678]: I1124 11:51:16.652349 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:51:16 crc kubenswrapper[4678]: I1124 11:51:16.652418 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:51:17 crc kubenswrapper[4678]: I1124 11:51:17.796243 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tm4qh" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="registry-server" probeResult="failure" output=< Nov 24 11:51:17 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 11:51:17 crc kubenswrapper[4678]: > Nov 24 11:51:21 crc kubenswrapper[4678]: I1124 11:51:21.352376 4678 generic.go:334] "Generic (PLEG): container finished" podID="dd152515-28eb-453c-a841-34dc603a3c3d" containerID="90abae6dede68a87ee2b2dac80b0cb5b9a2f1cb8817cafe7a38d34bbb232b42f" exitCode=0 Nov 24 11:51:21 crc kubenswrapper[4678]: I1124 11:51:21.352485 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" event={"ID":"dd152515-28eb-453c-a841-34dc603a3c3d","Type":"ContainerDied","Data":"90abae6dede68a87ee2b2dac80b0cb5b9a2f1cb8817cafe7a38d34bbb232b42f"} Nov 24 11:51:22 crc kubenswrapper[4678]: I1124 11:51:22.829584 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:22 crc kubenswrapper[4678]: I1124 11:51:22.897641 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7bw2\" (UniqueName: \"kubernetes.io/projected/dd152515-28eb-453c-a841-34dc603a3c3d-kube-api-access-p7bw2\") pod \"dd152515-28eb-453c-a841-34dc603a3c3d\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " Nov 24 11:51:22 crc kubenswrapper[4678]: I1124 11:51:22.898010 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-ssh-key-openstack-edpm-ipam\") pod \"dd152515-28eb-453c-a841-34dc603a3c3d\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " Nov 24 11:51:22 crc kubenswrapper[4678]: I1124 11:51:22.898102 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-inventory-0\") pod \"dd152515-28eb-453c-a841-34dc603a3c3d\" (UID: \"dd152515-28eb-453c-a841-34dc603a3c3d\") " Nov 24 11:51:22 crc kubenswrapper[4678]: I1124 11:51:22.907351 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd152515-28eb-453c-a841-34dc603a3c3d-kube-api-access-p7bw2" (OuterVolumeSpecName: "kube-api-access-p7bw2") pod "dd152515-28eb-453c-a841-34dc603a3c3d" (UID: "dd152515-28eb-453c-a841-34dc603a3c3d"). InnerVolumeSpecName "kube-api-access-p7bw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:22 crc kubenswrapper[4678]: I1124 11:51:22.938884 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "dd152515-28eb-453c-a841-34dc603a3c3d" (UID: "dd152515-28eb-453c-a841-34dc603a3c3d"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:22 crc kubenswrapper[4678]: I1124 11:51:22.957989 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd152515-28eb-453c-a841-34dc603a3c3d" (UID: "dd152515-28eb-453c-a841-34dc603a3c3d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.000329 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7bw2\" (UniqueName: \"kubernetes.io/projected/dd152515-28eb-453c-a841-34dc603a3c3d-kube-api-access-p7bw2\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.000363 4678 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.000374 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd152515-28eb-453c-a841-34dc603a3c3d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.376385 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" event={"ID":"dd152515-28eb-453c-a841-34dc603a3c3d","Type":"ContainerDied","Data":"a17bf5173e56db3b18108dc66ca00415284714508a06cfb7fef8ee24c71138b4"} Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.376786 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17bf5173e56db3b18108dc66ca00415284714508a06cfb7fef8ee24c71138b4" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.376487 
4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mwg84" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.473656 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2"] Nov 24 11:51:23 crc kubenswrapper[4678]: E1124 11:51:23.474252 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd152515-28eb-453c-a841-34dc603a3c3d" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.474277 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd152515-28eb-453c-a841-34dc603a3c3d" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.474520 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd152515-28eb-453c-a841-34dc603a3c3d" containerName="ssh-known-hosts-edpm-deployment" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.475365 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.478432 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.478850 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.478881 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.479707 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.500579 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2"] Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.514357 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc64h\" (UniqueName: \"kubernetes.io/projected/49c5d423-1095-46d6-9054-a1957402fd7e-kube-api-access-fc64h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.514498 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.514583 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.616981 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.617068 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.617226 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc64h\" (UniqueName: \"kubernetes.io/projected/49c5d423-1095-46d6-9054-a1957402fd7e-kube-api-access-fc64h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.621956 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.622681 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.638695 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc64h\" (UniqueName: \"kubernetes.io/projected/49c5d423-1095-46d6-9054-a1957402fd7e-kube-api-access-fc64h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vxmd2\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:23 crc kubenswrapper[4678]: I1124 11:51:23.793171 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:24 crc kubenswrapper[4678]: I1124 11:51:24.332488 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:51:24 crc kubenswrapper[4678]: I1124 11:51:24.394283 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 11:51:24 crc kubenswrapper[4678]: W1124 11:51:24.495291 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49c5d423_1095_46d6_9054_a1957402fd7e.slice/crio-e4a5878f45e67d63815dcd9a0302eb4a5abdd59502c4e3bcefd3d5120a0d8297 WatchSource:0}: Error finding container e4a5878f45e67d63815dcd9a0302eb4a5abdd59502c4e3bcefd3d5120a0d8297: Status 404 returned error can't find the container with id e4a5878f45e67d63815dcd9a0302eb4a5abdd59502c4e3bcefd3d5120a0d8297 Nov 24 11:51:24 crc kubenswrapper[4678]: I1124 11:51:24.498657 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2"] Nov 24 11:51:24 crc kubenswrapper[4678]: I1124 11:51:24.921171 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl86d"] Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.105033 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtp9r"] Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.105975 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xtp9r" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="registry-server" containerID="cri-o://b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737" gracePeriod=2 Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.423834 4678 generic.go:334] "Generic (PLEG): 
container finished" podID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerID="b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737" exitCode=0 Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.424366 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtp9r" event={"ID":"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1","Type":"ContainerDied","Data":"b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737"} Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.430899 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.430937 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:25 crc kubenswrapper[4678]: E1124 11:51:25.439101 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737 is running failed: container process not found" containerID="b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:51:25 crc kubenswrapper[4678]: E1124 11:51:25.448082 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737 is running failed: container process not found" containerID="b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.448624 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" 
event={"ID":"49c5d423-1095-46d6-9054-a1957402fd7e","Type":"ContainerStarted","Data":"954bddc794404951f114d9f495f035f979e9a0907c9d915f217d176d78684ba4"} Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.448705 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" event={"ID":"49c5d423-1095-46d6-9054-a1957402fd7e","Type":"ContainerStarted","Data":"e4a5878f45e67d63815dcd9a0302eb4a5abdd59502c4e3bcefd3d5120a0d8297"} Nov 24 11:51:25 crc kubenswrapper[4678]: E1124 11:51:25.448995 4678 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737 is running failed: container process not found" containerID="b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 11:51:25 crc kubenswrapper[4678]: E1124 11:51:25.449040 4678 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-xtp9r" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="registry-server" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.476996 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" podStartSLOduration=2.050485975 podStartE2EDuration="2.476977567s" podCreationTimestamp="2025-11-24 11:51:23 +0000 UTC" firstStartedPulling="2025-11-24 11:51:24.498698068 +0000 UTC m=+2095.429757707" lastFinishedPulling="2025-11-24 11:51:24.92518966 +0000 UTC m=+2095.856249299" observedRunningTime="2025-11-24 11:51:25.465929311 +0000 UTC m=+2096.396988950" watchObservedRunningTime="2025-11-24 
11:51:25.476977567 +0000 UTC m=+2096.408037206" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.530092 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.716454 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.721653 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnkpb\" (UniqueName: \"kubernetes.io/projected/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-kube-api-access-gnkpb\") pod \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.721774 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-catalog-content\") pod \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.721804 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-utilities\") pod \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\" (UID: \"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1\") " Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.730253 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-kube-api-access-gnkpb" (OuterVolumeSpecName: "kube-api-access-gnkpb") pod "26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" (UID: "26ff8c7f-bc62-4204-b23d-4e6844c3d3c1"). InnerVolumeSpecName "kube-api-access-gnkpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.747139 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-utilities" (OuterVolumeSpecName: "utilities") pod "26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" (UID: "26ff8c7f-bc62-4204-b23d-4e6844c3d3c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.824178 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnkpb\" (UniqueName: \"kubernetes.io/projected/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-kube-api-access-gnkpb\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.824477 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.858702 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" (UID: "26ff8c7f-bc62-4204-b23d-4e6844c3d3c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:51:25 crc kubenswrapper[4678]: I1124 11:51:25.926344 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.459514 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtp9r" event={"ID":"26ff8c7f-bc62-4204-b23d-4e6844c3d3c1","Type":"ContainerDied","Data":"014dee6956b229e47cbf822fcf0142daff99d9601d61f9d56aad2bf20be30326"} Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.459900 4678 scope.go:117] "RemoveContainer" containerID="b24e288520beb31f463a32f7cd82ca5d335370a81343596549773e4278ce6737" Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.459600 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtp9r" Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.493357 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtp9r"] Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.497732 4678 scope.go:117] "RemoveContainer" containerID="32959bcad638bd8b4a1c90451214281c9f1ed4a1e8afb3c3c7b639a523ca0e26" Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.507116 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xtp9r"] Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.520062 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.546187 4678 scope.go:117] "RemoveContainer" containerID="845357932fcd26ca283abd0054b6ae298c07ffab09d7a16440d3a0a1bee6d16e" Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.709132 4678 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:51:26 crc kubenswrapper[4678]: I1124 11:51:26.773895 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:51:27 crc kubenswrapper[4678]: I1124 11:51:27.954041 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" path="/var/lib/kubelet/pods/26ff8c7f-bc62-4204-b23d-4e6844c3d3c1/volumes" Nov 24 11:51:27 crc kubenswrapper[4678]: I1124 11:51:27.955448 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xrj5c"] Nov 24 11:51:28 crc kubenswrapper[4678]: I1124 11:51:28.482585 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xrj5c" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerName="registry-server" containerID="cri-o://9e2524cbe62f0f22f0e10c5317fb0ef4e9a3952abb9a87f82d7520dab9458381" gracePeriod=2 Nov 24 11:51:28 crc kubenswrapper[4678]: I1124 11:51:28.511505 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tm4qh"] Nov 24 11:51:28 crc kubenswrapper[4678]: I1124 11:51:28.511825 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tm4qh" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="registry-server" containerID="cri-o://ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e" gracePeriod=2 Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.273631 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.425082 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zptz8\" (UniqueName: \"kubernetes.io/projected/33e64aad-d54d-471b-9e1a-622e63ea3598-kube-api-access-zptz8\") pod \"33e64aad-d54d-471b-9e1a-622e63ea3598\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.425374 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-catalog-content\") pod \"33e64aad-d54d-471b-9e1a-622e63ea3598\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.425520 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-utilities\") pod \"33e64aad-d54d-471b-9e1a-622e63ea3598\" (UID: \"33e64aad-d54d-471b-9e1a-622e63ea3598\") " Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.427205 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-utilities" (OuterVolumeSpecName: "utilities") pod "33e64aad-d54d-471b-9e1a-622e63ea3598" (UID: "33e64aad-d54d-471b-9e1a-622e63ea3598"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.467216 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e64aad-d54d-471b-9e1a-622e63ea3598-kube-api-access-zptz8" (OuterVolumeSpecName: "kube-api-access-zptz8") pod "33e64aad-d54d-471b-9e1a-622e63ea3598" (UID: "33e64aad-d54d-471b-9e1a-622e63ea3598"). InnerVolumeSpecName "kube-api-access-zptz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.527758 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33e64aad-d54d-471b-9e1a-622e63ea3598" (UID: "33e64aad-d54d-471b-9e1a-622e63ea3598"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.531774 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.531897 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zptz8\" (UniqueName: \"kubernetes.io/projected/33e64aad-d54d-471b-9e1a-622e63ea3598-kube-api-access-zptz8\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.531961 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e64aad-d54d-471b-9e1a-622e63ea3598-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.544926 4678 generic.go:334] "Generic (PLEG): container finished" podID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerID="9e2524cbe62f0f22f0e10c5317fb0ef4e9a3952abb9a87f82d7520dab9458381" exitCode=0 Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.545053 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrj5c" event={"ID":"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442","Type":"ContainerDied","Data":"9e2524cbe62f0f22f0e10c5317fb0ef4e9a3952abb9a87f82d7520dab9458381"} Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.551294 4678 generic.go:334] "Generic (PLEG): container 
finished" podID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerID="ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e" exitCode=0 Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.551406 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm4qh" event={"ID":"33e64aad-d54d-471b-9e1a-622e63ea3598","Type":"ContainerDied","Data":"ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e"} Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.551486 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm4qh" event={"ID":"33e64aad-d54d-471b-9e1a-622e63ea3598","Type":"ContainerDied","Data":"76e0ad4f2e083a3fd9d2070fabe24c8d11638baaa144b1e449fcd64c07cc3d58"} Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.551550 4678 scope.go:117] "RemoveContainer" containerID="ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.551781 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tm4qh" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.574033 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.627687 4678 scope.go:117] "RemoveContainer" containerID="a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.683794 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tm4qh"] Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.707160 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tm4qh"] Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.713905 4678 scope.go:117] "RemoveContainer" containerID="baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.742203 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-utilities\") pod \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.742390 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-catalog-content\") pod \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.742464 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jl9z\" (UniqueName: \"kubernetes.io/projected/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-kube-api-access-7jl9z\") pod \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\" (UID: \"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442\") " Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.743134 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-utilities" (OuterVolumeSpecName: "utilities") pod "90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" (UID: "90dd0a42-3c18-4e1a-a1d3-8c2ae1770442"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.743588 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.747159 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-kube-api-access-7jl9z" (OuterVolumeSpecName: "kube-api-access-7jl9z") pod "90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" (UID: "90dd0a42-3c18-4e1a-a1d3-8c2ae1770442"). InnerVolumeSpecName "kube-api-access-7jl9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.818163 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" (UID: "90dd0a42-3c18-4e1a-a1d3-8c2ae1770442"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.846421 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.846458 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jl9z\" (UniqueName: \"kubernetes.io/projected/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442-kube-api-access-7jl9z\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.863486 4678 scope.go:117] "RemoveContainer" containerID="ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e" Nov 24 11:51:29 crc kubenswrapper[4678]: E1124 11:51:29.863911 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e\": container with ID starting with ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e not found: ID does not exist" containerID="ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.863942 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e"} err="failed to get container status \"ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e\": rpc error: code = NotFound desc = could not find container \"ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e\": container with ID starting with ad403884765dda45dafcea9a6bce9f7d2a5d2ab64f4f08baec72f455771d2c0e not found: ID does not exist" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.863979 4678 scope.go:117] "RemoveContainer" 
containerID="a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3" Nov 24 11:51:29 crc kubenswrapper[4678]: E1124 11:51:29.864281 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3\": container with ID starting with a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3 not found: ID does not exist" containerID="a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.864311 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3"} err="failed to get container status \"a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3\": rpc error: code = NotFound desc = could not find container \"a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3\": container with ID starting with a4885d92bd7c04ec6120190f04cc7e98bfbc752b2807a17fc592f1e62af9bde3 not found: ID does not exist" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.864328 4678 scope.go:117] "RemoveContainer" containerID="baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d" Nov 24 11:51:29 crc kubenswrapper[4678]: E1124 11:51:29.864520 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d\": container with ID starting with baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d not found: ID does not exist" containerID="baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.864540 4678 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d"} err="failed to get container status \"baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d\": rpc error: code = NotFound desc = could not find container \"baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d\": container with ID starting with baa21fcdc29929d14097e23814cf435aaeaa334d3ac125d77c2ac91e68f4f99d not found: ID does not exist" Nov 24 11:51:29 crc kubenswrapper[4678]: I1124 11:51:29.941913 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" path="/var/lib/kubelet/pods/33e64aad-d54d-471b-9e1a-622e63ea3598/volumes" Nov 24 11:51:30 crc kubenswrapper[4678]: I1124 11:51:30.297168 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:51:30 crc kubenswrapper[4678]: I1124 11:51:30.297535 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:51:30 crc kubenswrapper[4678]: I1124 11:51:30.565106 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrj5c" event={"ID":"90dd0a42-3c18-4e1a-a1d3-8c2ae1770442","Type":"ContainerDied","Data":"46f8f791697f294e3295d5096b45b2aed5e392408a5731747649b05d19855d5a"} Nov 24 11:51:30 crc kubenswrapper[4678]: I1124 11:51:30.565186 4678 scope.go:117] "RemoveContainer" containerID="9e2524cbe62f0f22f0e10c5317fb0ef4e9a3952abb9a87f82d7520dab9458381" Nov 24 11:51:30 crc 
kubenswrapper[4678]: I1124 11:51:30.565126 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xrj5c" Nov 24 11:51:30 crc kubenswrapper[4678]: I1124 11:51:30.601845 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xrj5c"] Nov 24 11:51:30 crc kubenswrapper[4678]: I1124 11:51:30.602015 4678 scope.go:117] "RemoveContainer" containerID="531efd2ce6945e86c8a7cd0b712eb126b8a53a39f0a2bc1ffa384d44a9ae2136" Nov 24 11:51:30 crc kubenswrapper[4678]: I1124 11:51:30.613330 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xrj5c"] Nov 24 11:51:30 crc kubenswrapper[4678]: I1124 11:51:30.636371 4678 scope.go:117] "RemoveContainer" containerID="df2ff05da0206859fe7f3222a27e5535213a096ae79844bcc25694c23682cd1d" Nov 24 11:51:31 crc kubenswrapper[4678]: I1124 11:51:31.911549 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" path="/var/lib/kubelet/pods/90dd0a42-3c18-4e1a-a1d3-8c2ae1770442/volumes" Nov 24 11:51:34 crc kubenswrapper[4678]: I1124 11:51:34.614537 4678 generic.go:334] "Generic (PLEG): container finished" podID="49c5d423-1095-46d6-9054-a1957402fd7e" containerID="954bddc794404951f114d9f495f035f979e9a0907c9d915f217d176d78684ba4" exitCode=0 Nov 24 11:51:34 crc kubenswrapper[4678]: I1124 11:51:34.614745 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" event={"ID":"49c5d423-1095-46d6-9054-a1957402fd7e","Type":"ContainerDied","Data":"954bddc794404951f114d9f495f035f979e9a0907c9d915f217d176d78684ba4"} Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.145189 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.217255 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-inventory\") pod \"49c5d423-1095-46d6-9054-a1957402fd7e\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.217320 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key\") pod \"49c5d423-1095-46d6-9054-a1957402fd7e\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.217534 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc64h\" (UniqueName: \"kubernetes.io/projected/49c5d423-1095-46d6-9054-a1957402fd7e-kube-api-access-fc64h\") pod \"49c5d423-1095-46d6-9054-a1957402fd7e\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.223190 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c5d423-1095-46d6-9054-a1957402fd7e-kube-api-access-fc64h" (OuterVolumeSpecName: "kube-api-access-fc64h") pod "49c5d423-1095-46d6-9054-a1957402fd7e" (UID: "49c5d423-1095-46d6-9054-a1957402fd7e"). InnerVolumeSpecName "kube-api-access-fc64h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.246337 4678 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key podName:49c5d423-1095-46d6-9054-a1957402fd7e nodeName:}" failed. No retries permitted until 2025-11-24 11:51:36.746302122 +0000 UTC m=+2107.677361771 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key") pod "49c5d423-1095-46d6-9054-a1957402fd7e" (UID: "49c5d423-1095-46d6-9054-a1957402fd7e") : error deleting /var/lib/kubelet/pods/49c5d423-1095-46d6-9054-a1957402fd7e/volume-subpaths: remove /var/lib/kubelet/pods/49c5d423-1095-46d6-9054-a1957402fd7e/volume-subpaths: no such file or directory Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.249419 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-inventory" (OuterVolumeSpecName: "inventory") pod "49c5d423-1095-46d6-9054-a1957402fd7e" (UID: "49c5d423-1095-46d6-9054-a1957402fd7e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.320236 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc64h\" (UniqueName: \"kubernetes.io/projected/49c5d423-1095-46d6-9054-a1957402fd7e-kube-api-access-fc64h\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.320283 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.637463 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" event={"ID":"49c5d423-1095-46d6-9054-a1957402fd7e","Type":"ContainerDied","Data":"e4a5878f45e67d63815dcd9a0302eb4a5abdd59502c4e3bcefd3d5120a0d8297"} Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.637521 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4a5878f45e67d63815dcd9a0302eb4a5abdd59502c4e3bcefd3d5120a0d8297" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 
11:51:36.637592 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vxmd2" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.721526 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7"] Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722036 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerName="extract-content" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722052 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerName="extract-content" Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722071 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722078 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722097 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c5d423-1095-46d6-9054-a1957402fd7e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722105 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c5d423-1095-46d6-9054-a1957402fd7e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722119 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="extract-content" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722124 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="extract-content" Nov 24 
11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722142 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722148 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722160 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="extract-utilities" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722165 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="extract-utilities" Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722180 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722186 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722208 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerName="extract-utilities" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722216 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerName="extract-utilities" Nov 24 11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722226 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="extract-utilities" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722232 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="extract-utilities" Nov 24 
11:51:36 crc kubenswrapper[4678]: E1124 11:51:36.722241 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="extract-content" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722246 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="extract-content" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722449 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e64aad-d54d-471b-9e1a-622e63ea3598" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722470 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ff8c7f-bc62-4204-b23d-4e6844c3d3c1" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722500 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c5d423-1095-46d6-9054-a1957402fd7e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.722514 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="90dd0a42-3c18-4e1a-a1d3-8c2ae1770442" containerName="registry-server" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.723345 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.734954 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7"] Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.832832 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key\") pod \"49c5d423-1095-46d6-9054-a1957402fd7e\" (UID: \"49c5d423-1095-46d6-9054-a1957402fd7e\") " Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.833548 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.833712 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.833889 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tnz\" (UniqueName: \"kubernetes.io/projected/5826f176-5b24-4f37-93db-b8ab73e42443-kube-api-access-l6tnz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc 
kubenswrapper[4678]: I1124 11:51:36.838021 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "49c5d423-1095-46d6-9054-a1957402fd7e" (UID: "49c5d423-1095-46d6-9054-a1957402fd7e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.935932 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.936123 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tnz\" (UniqueName: \"kubernetes.io/projected/5826f176-5b24-4f37-93db-b8ab73e42443-kube-api-access-l6tnz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.936182 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.936252 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49c5d423-1095-46d6-9054-a1957402fd7e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.940777 
4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.949286 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:36 crc kubenswrapper[4678]: I1124 11:51:36.954042 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tnz\" (UniqueName: \"kubernetes.io/projected/5826f176-5b24-4f37-93db-b8ab73e42443-kube-api-access-l6tnz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:37 crc kubenswrapper[4678]: I1124 11:51:37.049297 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:37 crc kubenswrapper[4678]: I1124 11:51:37.555771 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7"] Nov 24 11:51:37 crc kubenswrapper[4678]: I1124 11:51:37.662865 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" event={"ID":"5826f176-5b24-4f37-93db-b8ab73e42443","Type":"ContainerStarted","Data":"df3cb721917f80fe308fce057c7b2b222c51fea8a60def91b005473f6d97d12f"} Nov 24 11:51:38 crc kubenswrapper[4678]: I1124 11:51:38.676348 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" event={"ID":"5826f176-5b24-4f37-93db-b8ab73e42443","Type":"ContainerStarted","Data":"eca544f99d9f83c7d6645482b21e558be36efb1ea99182e9b01616ec63a7bbe7"} Nov 24 11:51:38 crc kubenswrapper[4678]: I1124 11:51:38.707318 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" podStartSLOduration=2.288113482 podStartE2EDuration="2.707299257s" podCreationTimestamp="2025-11-24 11:51:36 +0000 UTC" firstStartedPulling="2025-11-24 11:51:37.561411281 +0000 UTC m=+2108.492470920" lastFinishedPulling="2025-11-24 11:51:37.980597056 +0000 UTC m=+2108.911656695" observedRunningTime="2025-11-24 11:51:38.701132903 +0000 UTC m=+2109.632192552" watchObservedRunningTime="2025-11-24 11:51:38.707299257 +0000 UTC m=+2109.638358896" Nov 24 11:51:47 crc kubenswrapper[4678]: I1124 11:51:47.051207 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-crx7v"] Nov 24 11:51:47 crc kubenswrapper[4678]: I1124 11:51:47.071465 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-crx7v"] Nov 24 11:51:47 crc kubenswrapper[4678]: I1124 11:51:47.770868 4678 generic.go:334] 
"Generic (PLEG): container finished" podID="5826f176-5b24-4f37-93db-b8ab73e42443" containerID="eca544f99d9f83c7d6645482b21e558be36efb1ea99182e9b01616ec63a7bbe7" exitCode=0 Nov 24 11:51:47 crc kubenswrapper[4678]: I1124 11:51:47.770964 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" event={"ID":"5826f176-5b24-4f37-93db-b8ab73e42443","Type":"ContainerDied","Data":"eca544f99d9f83c7d6645482b21e558be36efb1ea99182e9b01616ec63a7bbe7"} Nov 24 11:51:47 crc kubenswrapper[4678]: I1124 11:51:47.914810 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7fab76-c5f4-450f-be9b-d433395cbcf3" path="/var/lib/kubelet/pods/7e7fab76-c5f4-450f-be9b-d433395cbcf3/volumes" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.325196 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.472933 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-ssh-key\") pod \"5826f176-5b24-4f37-93db-b8ab73e42443\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.473291 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-inventory\") pod \"5826f176-5b24-4f37-93db-b8ab73e42443\" (UID: \"5826f176-5b24-4f37-93db-b8ab73e42443\") " Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.473529 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6tnz\" (UniqueName: \"kubernetes.io/projected/5826f176-5b24-4f37-93db-b8ab73e42443-kube-api-access-l6tnz\") pod \"5826f176-5b24-4f37-93db-b8ab73e42443\" (UID: 
\"5826f176-5b24-4f37-93db-b8ab73e42443\") " Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.480222 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5826f176-5b24-4f37-93db-b8ab73e42443-kube-api-access-l6tnz" (OuterVolumeSpecName: "kube-api-access-l6tnz") pod "5826f176-5b24-4f37-93db-b8ab73e42443" (UID: "5826f176-5b24-4f37-93db-b8ab73e42443"). InnerVolumeSpecName "kube-api-access-l6tnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.507065 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5826f176-5b24-4f37-93db-b8ab73e42443" (UID: "5826f176-5b24-4f37-93db-b8ab73e42443"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.507744 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-inventory" (OuterVolumeSpecName: "inventory") pod "5826f176-5b24-4f37-93db-b8ab73e42443" (UID: "5826f176-5b24-4f37-93db-b8ab73e42443"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.576205 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6tnz\" (UniqueName: \"kubernetes.io/projected/5826f176-5b24-4f37-93db-b8ab73e42443-kube-api-access-l6tnz\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.576245 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.576257 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5826f176-5b24-4f37-93db-b8ab73e42443-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.801915 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" event={"ID":"5826f176-5b24-4f37-93db-b8ab73e42443","Type":"ContainerDied","Data":"df3cb721917f80fe308fce057c7b2b222c51fea8a60def91b005473f6d97d12f"} Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.801974 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df3cb721917f80fe308fce057c7b2b222c51fea8a60def91b005473f6d97d12f" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.802018 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.920213 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj"] Nov 24 11:51:49 crc kubenswrapper[4678]: E1124 11:51:49.921942 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5826f176-5b24-4f37-93db-b8ab73e42443" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.922102 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5826f176-5b24-4f37-93db-b8ab73e42443" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.922689 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5826f176-5b24-4f37-93db-b8ab73e42443" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.924317 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.928561 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.929016 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.929161 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.929335 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.929455 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.930559 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.930847 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.930970 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.931251 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.932653 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj"] Nov 24 11:51:49 crc 
kubenswrapper[4678]: I1124 11:51:49.987323 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987485 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987537 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987575 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987600 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987683 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987720 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987821 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987872 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987909 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987956 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.987980 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: 
\"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.988023 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.988110 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.988135 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4l2\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-kube-api-access-wf4l2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:49 crc kubenswrapper[4678]: I1124 11:51:49.988194 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc 
kubenswrapper[4678]: I1124 11:51:50.090301 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090388 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090426 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090454 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 
11:51:50.090509 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090535 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090570 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090618 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090635 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf4l2\" (UniqueName: 
\"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-kube-api-access-wf4l2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090729 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090781 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090866 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090913 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090933 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090951 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.090975 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.095965 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.096271 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.097746 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.098606 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.099166 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.099422 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.099443 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.100081 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.100657 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" 
(UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.100913 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.101118 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.101595 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.101734 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc 
kubenswrapper[4678]: I1124 11:51:50.102132 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.109519 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf4l2\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-kube-api-access-wf4l2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.110961 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-frcbj\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.256080 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:51:50 crc kubenswrapper[4678]: I1124 11:51:50.822996 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj"] Nov 24 11:51:50 crc kubenswrapper[4678]: W1124 11:51:50.823955 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35879489_c790_4b02_abb6_da023eef4eac.slice/crio-e7100fa84dd71655d944a257468a51dcd57bf170fa86c3021e8844e04677e047 WatchSource:0}: Error finding container e7100fa84dd71655d944a257468a51dcd57bf170fa86c3021e8844e04677e047: Status 404 returned error can't find the container with id e7100fa84dd71655d944a257468a51dcd57bf170fa86c3021e8844e04677e047 Nov 24 11:51:51 crc kubenswrapper[4678]: I1124 11:51:51.822628 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" event={"ID":"35879489-c790-4b02-abb6-da023eef4eac","Type":"ContainerStarted","Data":"e7100fa84dd71655d944a257468a51dcd57bf170fa86c3021e8844e04677e047"} Nov 24 11:51:52 crc kubenswrapper[4678]: I1124 11:51:52.835101 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" event={"ID":"35879489-c790-4b02-abb6-da023eef4eac","Type":"ContainerStarted","Data":"924244ab601954999f7e3950dcf312dafccc9f1874ad39b3a1f7e0f1c5c3686d"} Nov 24 11:51:52 crc kubenswrapper[4678]: I1124 11:51:52.860471 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" podStartSLOduration=3.029971193 podStartE2EDuration="3.860452823s" podCreationTimestamp="2025-11-24 11:51:49 +0000 UTC" firstStartedPulling="2025-11-24 11:51:50.8278822 +0000 UTC m=+2121.758941839" lastFinishedPulling="2025-11-24 11:51:51.65836383 +0000 UTC m=+2122.589423469" 
observedRunningTime="2025-11-24 11:51:52.858194622 +0000 UTC m=+2123.789254281" watchObservedRunningTime="2025-11-24 11:51:52.860452823 +0000 UTC m=+2123.791512482" Nov 24 11:52:00 crc kubenswrapper[4678]: I1124 11:52:00.297270 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:52:00 crc kubenswrapper[4678]: I1124 11:52:00.297870 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:52:30 crc kubenswrapper[4678]: I1124 11:52:30.070717 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-26ncd"] Nov 24 11:52:30 crc kubenswrapper[4678]: I1124 11:52:30.086054 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-26ncd"] Nov 24 11:52:30 crc kubenswrapper[4678]: I1124 11:52:30.296869 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 11:52:30 crc kubenswrapper[4678]: I1124 11:52:30.296992 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 11:52:30 crc 
kubenswrapper[4678]: I1124 11:52:30.297080 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 11:52:30 crc kubenswrapper[4678]: I1124 11:52:30.298917 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 11:52:30 crc kubenswrapper[4678]: I1124 11:52:30.299045 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" gracePeriod=600 Nov 24 11:52:30 crc kubenswrapper[4678]: E1124 11:52:30.425604 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:52:31 crc kubenswrapper[4678]: I1124 11:52:31.291001 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" exitCode=0 Nov 24 11:52:31 crc kubenswrapper[4678]: I1124 11:52:31.291343 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7"} Nov 24 11:52:31 crc kubenswrapper[4678]: I1124 11:52:31.291380 4678 scope.go:117] "RemoveContainer" containerID="07f7b4bf38854f595d8be8c0fa05f91ad02239dc235ff30184b0ce433099dc00" Nov 24 11:52:31 crc kubenswrapper[4678]: I1124 11:52:31.291893 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:52:31 crc kubenswrapper[4678]: E1124 11:52:31.292349 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:52:31 crc kubenswrapper[4678]: I1124 11:52:31.914944 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88740e07-191d-494a-bba6-3b0c5f3a9b12" path="/var/lib/kubelet/pods/88740e07-191d-494a-bba6-3b0c5f3a9b12/volumes" Nov 24 11:52:37 crc kubenswrapper[4678]: I1124 11:52:37.356995 4678 generic.go:334] "Generic (PLEG): container finished" podID="35879489-c790-4b02-abb6-da023eef4eac" containerID="924244ab601954999f7e3950dcf312dafccc9f1874ad39b3a1f7e0f1c5c3686d" exitCode=0 Nov 24 11:52:37 crc kubenswrapper[4678]: I1124 11:52:37.357116 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" event={"ID":"35879489-c790-4b02-abb6-da023eef4eac","Type":"ContainerDied","Data":"924244ab601954999f7e3950dcf312dafccc9f1874ad39b3a1f7e0f1c5c3686d"} Nov 24 11:52:37 crc kubenswrapper[4678]: I1124 11:52:37.401256 4678 scope.go:117] "RemoveContainer" 
containerID="91872e1803e0c560b726223508d87107413852ca81dd2e986a30f9909f7ac2d0" Nov 24 11:52:37 crc kubenswrapper[4678]: I1124 11:52:37.445946 4678 scope.go:117] "RemoveContainer" containerID="d01b5c489cb384dca56647b6e2d16298a540b316474a27be35c7bf253b578a54" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.846964 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872192 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872322 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ssh-key\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872414 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-libvirt-combined-ca-bundle\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872478 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-combined-ca-bundle\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: 
\"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872528 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872559 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-neutron-metadata-combined-ca-bundle\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872601 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ovn-combined-ca-bundle\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872687 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872711 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-nova-combined-ca-bundle\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: 
\"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872772 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-power-monitoring-combined-ca-bundle\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872830 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872861 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-ovn-default-certs-0\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872920 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-bootstrap-combined-ca-bundle\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872962 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf4l2\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-kube-api-access-wf4l2\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: 
\"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.872988 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-repo-setup-combined-ca-bundle\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.873220 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-inventory\") pod \"35879489-c790-4b02-abb6-da023eef4eac\" (UID: \"35879489-c790-4b02-abb6-da023eef4eac\") " Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.880815 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.881402 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.882745 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.884540 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.885208 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-kube-api-access-wf4l2" (OuterVolumeSpecName: "kube-api-access-wf4l2") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "kube-api-access-wf4l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.887213 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.887946 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.887941 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.888424 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.889891 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). 
InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.890026 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.890068 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.890387 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.894178 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.922424 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.923383 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-inventory" (OuterVolumeSpecName: "inventory") pod "35879489-c790-4b02-abb6-da023eef4eac" (UID: "35879489-c790-4b02-abb6-da023eef4eac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977042 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977089 4678 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977102 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977113 4678 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977123 4678 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977137 4678 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977149 4678 reconciler_common.go:293] "Volume detached for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977161 4678 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977173 4678 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977183 4678 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977194 4678 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977205 4678 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977217 4678 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977228 4678 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977239 4678 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35879489-c790-4b02-abb6-da023eef4eac-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:38 crc kubenswrapper[4678]: I1124 11:52:38.977254 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf4l2\" (UniqueName: \"kubernetes.io/projected/35879489-c790-4b02-abb6-da023eef4eac-kube-api-access-wf4l2\") on node \"crc\" DevicePath \"\"" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.382808 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" event={"ID":"35879489-c790-4b02-abb6-da023eef4eac","Type":"ContainerDied","Data":"e7100fa84dd71655d944a257468a51dcd57bf170fa86c3021e8844e04677e047"} Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.382886 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7100fa84dd71655d944a257468a51dcd57bf170fa86c3021e8844e04677e047" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.382928 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-frcbj" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.508129 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh"] Nov 24 11:52:39 crc kubenswrapper[4678]: E1124 11:52:39.508772 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35879489-c790-4b02-abb6-da023eef4eac" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.508799 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="35879489-c790-4b02-abb6-da023eef4eac" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.509207 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="35879489-c790-4b02-abb6-da023eef4eac" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.510240 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.513825 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.513959 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.514088 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.514643 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.514897 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.537594 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh"] Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.590071 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldv8\" (UniqueName: \"kubernetes.io/projected/85dc2c98-9e06-457d-85be-821a21514762-kube-api-access-jldv8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.590153 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.590182 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.590266 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.590298 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/85dc2c98-9e06-457d-85be-821a21514762-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.692334 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.692425 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/85dc2c98-9e06-457d-85be-821a21514762-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.692516 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jldv8\" (UniqueName: \"kubernetes.io/projected/85dc2c98-9e06-457d-85be-821a21514762-kube-api-access-jldv8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.692599 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.692627 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.693476 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/85dc2c98-9e06-457d-85be-821a21514762-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc 
kubenswrapper[4678]: I1124 11:52:39.696186 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.696291 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.707704 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jldv8\" (UniqueName: \"kubernetes.io/projected/85dc2c98-9e06-457d-85be-821a21514762-kube-api-access-jldv8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.712357 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tprdh\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:39 crc kubenswrapper[4678]: I1124 11:52:39.836305 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:52:40 crc kubenswrapper[4678]: I1124 11:52:40.359594 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh"] Nov 24 11:52:40 crc kubenswrapper[4678]: I1124 11:52:40.394273 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" event={"ID":"85dc2c98-9e06-457d-85be-821a21514762","Type":"ContainerStarted","Data":"f2ba6fe78ba689ff15742ca3724bc2d8de102f76ab549eb330dcfc0b5b00a423"} Nov 24 11:52:41 crc kubenswrapper[4678]: I1124 11:52:41.414861 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" event={"ID":"85dc2c98-9e06-457d-85be-821a21514762","Type":"ContainerStarted","Data":"97d26ef457e8f976274d2ca945f584516869dd43a3033f208e187bb5f78436b9"} Nov 24 11:52:41 crc kubenswrapper[4678]: I1124 11:52:41.439886 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" podStartSLOduration=1.958688961 podStartE2EDuration="2.439871067s" podCreationTimestamp="2025-11-24 11:52:39 +0000 UTC" firstStartedPulling="2025-11-24 11:52:40.372948494 +0000 UTC m=+2171.304008133" lastFinishedPulling="2025-11-24 11:52:40.8541306 +0000 UTC m=+2171.785190239" observedRunningTime="2025-11-24 11:52:41.435955222 +0000 UTC m=+2172.367014861" watchObservedRunningTime="2025-11-24 11:52:41.439871067 +0000 UTC m=+2172.370930706" Nov 24 11:52:42 crc kubenswrapper[4678]: I1124 11:52:42.896616 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:52:42 crc kubenswrapper[4678]: E1124 11:52:42.897365 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:52:54 crc kubenswrapper[4678]: I1124 11:52:54.895218 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:52:54 crc kubenswrapper[4678]: E1124 11:52:54.896115 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:53:05 crc kubenswrapper[4678]: I1124 11:53:05.896282 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:53:05 crc kubenswrapper[4678]: E1124 11:53:05.897438 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:53:18 crc kubenswrapper[4678]: I1124 11:53:18.896199 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:53:18 crc kubenswrapper[4678]: E1124 11:53:18.897166 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:53:30 crc kubenswrapper[4678]: I1124 11:53:30.896017 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:53:30 crc kubenswrapper[4678]: E1124 11:53:30.897161 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:53:42 crc kubenswrapper[4678]: I1124 11:53:42.156064 4678 generic.go:334] "Generic (PLEG): container finished" podID="85dc2c98-9e06-457d-85be-821a21514762" containerID="97d26ef457e8f976274d2ca945f584516869dd43a3033f208e187bb5f78436b9" exitCode=0 Nov 24 11:53:42 crc kubenswrapper[4678]: I1124 11:53:42.156142 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" event={"ID":"85dc2c98-9e06-457d-85be-821a21514762","Type":"ContainerDied","Data":"97d26ef457e8f976274d2ca945f584516869dd43a3033f208e187bb5f78436b9"} Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.629831 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.748105 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ssh-key\") pod \"85dc2c98-9e06-457d-85be-821a21514762\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.748205 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-inventory\") pod \"85dc2c98-9e06-457d-85be-821a21514762\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.748422 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ovn-combined-ca-bundle\") pod \"85dc2c98-9e06-457d-85be-821a21514762\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.748502 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/85dc2c98-9e06-457d-85be-821a21514762-ovncontroller-config-0\") pod \"85dc2c98-9e06-457d-85be-821a21514762\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.748535 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jldv8\" (UniqueName: \"kubernetes.io/projected/85dc2c98-9e06-457d-85be-821a21514762-kube-api-access-jldv8\") pod \"85dc2c98-9e06-457d-85be-821a21514762\" (UID: \"85dc2c98-9e06-457d-85be-821a21514762\") " Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.753883 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "85dc2c98-9e06-457d-85be-821a21514762" (UID: "85dc2c98-9e06-457d-85be-821a21514762"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.757002 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85dc2c98-9e06-457d-85be-821a21514762-kube-api-access-jldv8" (OuterVolumeSpecName: "kube-api-access-jldv8") pod "85dc2c98-9e06-457d-85be-821a21514762" (UID: "85dc2c98-9e06-457d-85be-821a21514762"). InnerVolumeSpecName "kube-api-access-jldv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.779503 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-inventory" (OuterVolumeSpecName: "inventory") pod "85dc2c98-9e06-457d-85be-821a21514762" (UID: "85dc2c98-9e06-457d-85be-821a21514762"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.780265 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85dc2c98-9e06-457d-85be-821a21514762-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "85dc2c98-9e06-457d-85be-821a21514762" (UID: "85dc2c98-9e06-457d-85be-821a21514762"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.786840 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85dc2c98-9e06-457d-85be-821a21514762" (UID: "85dc2c98-9e06-457d-85be-821a21514762"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.851540 4678 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.851585 4678 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/85dc2c98-9e06-457d-85be-821a21514762-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.851603 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jldv8\" (UniqueName: \"kubernetes.io/projected/85dc2c98-9e06-457d-85be-821a21514762-kube-api-access-jldv8\") on node \"crc\" DevicePath \"\"" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.851616 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:53:43 crc kubenswrapper[4678]: I1124 11:53:43.851628 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85dc2c98-9e06-457d-85be-821a21514762-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.178877 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" event={"ID":"85dc2c98-9e06-457d-85be-821a21514762","Type":"ContainerDied","Data":"f2ba6fe78ba689ff15742ca3724bc2d8de102f76ab549eb330dcfc0b5b00a423"} Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.179165 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ba6fe78ba689ff15742ca3724bc2d8de102f76ab549eb330dcfc0b5b00a423" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.178985 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tprdh" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.265739 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj"] Nov 24 11:53:44 crc kubenswrapper[4678]: E1124 11:53:44.266421 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85dc2c98-9e06-457d-85be-821a21514762" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.266441 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="85dc2c98-9e06-457d-85be-821a21514762" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.267405 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="85dc2c98-9e06-457d-85be-821a21514762" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.268283 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.271644 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.271738 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.271941 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.272087 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.272133 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.272092 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.294343 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj"] Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.364439 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.364821 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.364919 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxfq\" (UniqueName: \"kubernetes.io/projected/06c13190-90f2-4686-8ec5-d1c8c8ae6928-kube-api-access-lxxfq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.364987 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.365260 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.365305 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.467623 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.467754 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxfq\" (UniqueName: \"kubernetes.io/projected/06c13190-90f2-4686-8ec5-d1c8c8ae6928-kube-api-access-lxxfq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.467790 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.467848 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.467866 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.467937 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.471725 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.472086 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: 
\"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.472135 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.472999 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.473512 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.486393 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxfq\" (UniqueName: \"kubernetes.io/projected/06c13190-90f2-4686-8ec5-d1c8c8ae6928-kube-api-access-lxxfq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc 
kubenswrapper[4678]: I1124 11:53:44.612127 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:53:44 crc kubenswrapper[4678]: I1124 11:53:44.895621 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:53:44 crc kubenswrapper[4678]: E1124 11:53:44.896347 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:53:45 crc kubenswrapper[4678]: I1124 11:53:45.140081 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj"] Nov 24 11:53:45 crc kubenswrapper[4678]: I1124 11:53:45.146633 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:53:45 crc kubenswrapper[4678]: I1124 11:53:45.190159 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" event={"ID":"06c13190-90f2-4686-8ec5-d1c8c8ae6928","Type":"ContainerStarted","Data":"17e3e010ca9bc6b021a7744a1a857bf4d6449fb00b42001278f5119a65e09517"} Nov 24 11:53:51 crc kubenswrapper[4678]: I1124 11:53:51.303045 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" event={"ID":"06c13190-90f2-4686-8ec5-d1c8c8ae6928","Type":"ContainerStarted","Data":"45f01456764c94033cd29dfc2884f4e097b4fbd4525c7d80d2e5519019f78e2b"} Nov 24 11:53:51 crc kubenswrapper[4678]: I1124 11:53:51.342431 4678 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" podStartSLOduration=1.881437498 podStartE2EDuration="7.34241487s" podCreationTimestamp="2025-11-24 11:53:44 +0000 UTC" firstStartedPulling="2025-11-24 11:53:45.146370314 +0000 UTC m=+2236.077429953" lastFinishedPulling="2025-11-24 11:53:50.607347686 +0000 UTC m=+2241.538407325" observedRunningTime="2025-11-24 11:53:51.326385991 +0000 UTC m=+2242.257445640" watchObservedRunningTime="2025-11-24 11:53:51.34241487 +0000 UTC m=+2242.273474509" Nov 24 11:53:57 crc kubenswrapper[4678]: I1124 11:53:57.896155 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:53:57 crc kubenswrapper[4678]: E1124 11:53:57.897084 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:54:11 crc kubenswrapper[4678]: I1124 11:54:11.896236 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:54:11 crc kubenswrapper[4678]: E1124 11:54:11.897410 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:54:26 crc kubenswrapper[4678]: I1124 11:54:26.895973 4678 
scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:54:26 crc kubenswrapper[4678]: E1124 11:54:26.897096 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:54:36 crc kubenswrapper[4678]: I1124 11:54:36.766072 4678 generic.go:334] "Generic (PLEG): container finished" podID="06c13190-90f2-4686-8ec5-d1c8c8ae6928" containerID="45f01456764c94033cd29dfc2884f4e097b4fbd4525c7d80d2e5519019f78e2b" exitCode=0 Nov 24 11:54:36 crc kubenswrapper[4678]: I1124 11:54:36.766174 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" event={"ID":"06c13190-90f2-4686-8ec5-d1c8c8ae6928","Type":"ContainerDied","Data":"45f01456764c94033cd29dfc2884f4e097b4fbd4525c7d80d2e5519019f78e2b"} Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.257825 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.336912 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-nova-metadata-neutron-config-0\") pod \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.337098 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-metadata-combined-ca-bundle\") pod \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.337149 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-ssh-key\") pod \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.337298 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-inventory\") pod \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.337332 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-ovn-metadata-agent-neutron-config-0\") pod \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " Nov 24 11:54:38 crc 
kubenswrapper[4678]: I1124 11:54:38.337372 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxxfq\" (UniqueName: \"kubernetes.io/projected/06c13190-90f2-4686-8ec5-d1c8c8ae6928-kube-api-access-lxxfq\") pod \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\" (UID: \"06c13190-90f2-4686-8ec5-d1c8c8ae6928\") " Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.343255 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c13190-90f2-4686-8ec5-d1c8c8ae6928-kube-api-access-lxxfq" (OuterVolumeSpecName: "kube-api-access-lxxfq") pod "06c13190-90f2-4686-8ec5-d1c8c8ae6928" (UID: "06c13190-90f2-4686-8ec5-d1c8c8ae6928"). InnerVolumeSpecName "kube-api-access-lxxfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.343725 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "06c13190-90f2-4686-8ec5-d1c8c8ae6928" (UID: "06c13190-90f2-4686-8ec5-d1c8c8ae6928"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.369216 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "06c13190-90f2-4686-8ec5-d1c8c8ae6928" (UID: "06c13190-90f2-4686-8ec5-d1c8c8ae6928"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.370537 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "06c13190-90f2-4686-8ec5-d1c8c8ae6928" (UID: "06c13190-90f2-4686-8ec5-d1c8c8ae6928"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.379154 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-inventory" (OuterVolumeSpecName: "inventory") pod "06c13190-90f2-4686-8ec5-d1c8c8ae6928" (UID: "06c13190-90f2-4686-8ec5-d1c8c8ae6928"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.380697 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06c13190-90f2-4686-8ec5-d1c8c8ae6928" (UID: "06c13190-90f2-4686-8ec5-d1c8c8ae6928"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.453781 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.453831 4678 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.453848 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxxfq\" (UniqueName: \"kubernetes.io/projected/06c13190-90f2-4686-8ec5-d1c8c8ae6928-kube-api-access-lxxfq\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.453863 4678 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.453877 4678 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.453889 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06c13190-90f2-4686-8ec5-d1c8c8ae6928-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.785337 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" 
event={"ID":"06c13190-90f2-4686-8ec5-d1c8c8ae6928","Type":"ContainerDied","Data":"17e3e010ca9bc6b021a7744a1a857bf4d6449fb00b42001278f5119a65e09517"} Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.785374 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e3e010ca9bc6b021a7744a1a857bf4d6449fb00b42001278f5119a65e09517" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.785618 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.898017 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4"] Nov 24 11:54:38 crc kubenswrapper[4678]: E1124 11:54:38.898502 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c13190-90f2-4686-8ec5-d1c8c8ae6928" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.898520 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c13190-90f2-4686-8ec5-d1c8c8ae6928" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.898799 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c13190-90f2-4686-8ec5-d1c8c8ae6928" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.899649 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.902163 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.902528 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.902735 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.903229 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.903397 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.909764 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4"] Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.966190 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.966251 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.966451 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbzhv\" (UniqueName: \"kubernetes.io/projected/3c6b4924-9f1f-4528-bb08-480676547ff8-kube-api-access-tbzhv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.966952 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:38 crc kubenswrapper[4678]: I1124 11:54:38.967086 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.069885 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.070320 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.070428 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbzhv\" (UniqueName: \"kubernetes.io/projected/3c6b4924-9f1f-4528-bb08-480676547ff8-kube-api-access-tbzhv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.070570 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.070653 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.073964 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.073964 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.074980 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.077072 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.094926 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbzhv\" (UniqueName: \"kubernetes.io/projected/3c6b4924-9f1f-4528-bb08-480676547ff8-kube-api-access-tbzhv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-87nh4\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.218003 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:54:39 crc kubenswrapper[4678]: I1124 11:54:39.785490 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4"] Nov 24 11:54:40 crc kubenswrapper[4678]: I1124 11:54:40.824683 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" event={"ID":"3c6b4924-9f1f-4528-bb08-480676547ff8","Type":"ContainerStarted","Data":"ec044a42a915af18593ad85893fcbd621c90f539817fa002a8a041d1bdb027d9"} Nov 24 11:54:40 crc kubenswrapper[4678]: I1124 11:54:40.824729 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" event={"ID":"3c6b4924-9f1f-4528-bb08-480676547ff8","Type":"ContainerStarted","Data":"788fcfadb91dcea11dbb35d253a4df79c63a1db191fd44914a84848109d7fb5c"} Nov 24 11:54:40 crc kubenswrapper[4678]: I1124 11:54:40.841763 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" podStartSLOduration=2.3753254950000002 podStartE2EDuration="2.841734815s" podCreationTimestamp="2025-11-24 11:54:38 +0000 UTC" firstStartedPulling="2025-11-24 11:54:39.803356617 +0000 UTC m=+2290.734416266" lastFinishedPulling="2025-11-24 11:54:40.269765937 +0000 UTC m=+2291.200825586" observedRunningTime="2025-11-24 11:54:40.839956267 +0000 UTC m=+2291.771015916" watchObservedRunningTime="2025-11-24 11:54:40.841734815 +0000 UTC m=+2291.772794454" Nov 24 11:54:40 crc kubenswrapper[4678]: I1124 11:54:40.895699 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:54:40 crc kubenswrapper[4678]: E1124 11:54:40.896800 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:54:52 crc kubenswrapper[4678]: I1124 11:54:52.897009 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:54:52 crc kubenswrapper[4678]: E1124 11:54:52.898320 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:55:06 crc kubenswrapper[4678]: I1124 11:55:06.897914 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:55:06 crc kubenswrapper[4678]: E1124 11:55:06.899465 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:55:18 crc kubenswrapper[4678]: I1124 11:55:18.896701 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:55:18 crc kubenswrapper[4678]: E1124 11:55:18.897627 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:55:30 crc kubenswrapper[4678]: I1124 11:55:30.895939 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:55:30 crc kubenswrapper[4678]: E1124 11:55:30.896728 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:55:43 crc kubenswrapper[4678]: I1124 11:55:43.897104 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:55:43 crc kubenswrapper[4678]: E1124 11:55:43.897938 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:55:55 crc kubenswrapper[4678]: I1124 11:55:55.896883 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:55:55 crc kubenswrapper[4678]: E1124 11:55:55.897835 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:56:10 crc kubenswrapper[4678]: I1124 11:56:10.896015 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:56:10 crc kubenswrapper[4678]: E1124 11:56:10.896801 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:56:25 crc kubenswrapper[4678]: I1124 11:56:25.896768 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:56:25 crc kubenswrapper[4678]: E1124 11:56:25.898358 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:56:40 crc kubenswrapper[4678]: I1124 11:56:40.895619 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:56:40 crc kubenswrapper[4678]: E1124 11:56:40.896766 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:56:52 crc kubenswrapper[4678]: I1124 11:56:52.896254 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:56:52 crc kubenswrapper[4678]: E1124 11:56:52.897227 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:57:04 crc kubenswrapper[4678]: I1124 11:57:04.897238 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:57:04 crc kubenswrapper[4678]: E1124 11:57:04.898554 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:57:17 crc kubenswrapper[4678]: I1124 11:57:17.896806 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:57:17 crc kubenswrapper[4678]: E1124 11:57:17.897703 4678 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 11:57:30 crc kubenswrapper[4678]: I1124 11:57:30.896127 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 11:57:31 crc kubenswrapper[4678]: I1124 11:57:31.867611 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"d60c05291373c2a59fe98401e152effd6edd15bd4a9cf084c09c97e923c9a838"} Nov 24 11:58:48 crc kubenswrapper[4678]: I1124 11:58:48.723922 4678 generic.go:334] "Generic (PLEG): container finished" podID="3c6b4924-9f1f-4528-bb08-480676547ff8" containerID="ec044a42a915af18593ad85893fcbd621c90f539817fa002a8a041d1bdb027d9" exitCode=0 Nov 24 11:58:48 crc kubenswrapper[4678]: I1124 11:58:48.724003 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" event={"ID":"3c6b4924-9f1f-4528-bb08-480676547ff8","Type":"ContainerDied","Data":"ec044a42a915af18593ad85893fcbd621c90f539817fa002a8a041d1bdb027d9"} Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.376284 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.550957 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbzhv\" (UniqueName: \"kubernetes.io/projected/3c6b4924-9f1f-4528-bb08-480676547ff8-kube-api-access-tbzhv\") pod \"3c6b4924-9f1f-4528-bb08-480676547ff8\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.551264 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-inventory\") pod \"3c6b4924-9f1f-4528-bb08-480676547ff8\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.551349 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-secret-0\") pod \"3c6b4924-9f1f-4528-bb08-480676547ff8\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.551439 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-ssh-key\") pod \"3c6b4924-9f1f-4528-bb08-480676547ff8\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.551469 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-combined-ca-bundle\") pod \"3c6b4924-9f1f-4528-bb08-480676547ff8\" (UID: \"3c6b4924-9f1f-4528-bb08-480676547ff8\") " Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.557136 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3c6b4924-9f1f-4528-bb08-480676547ff8" (UID: "3c6b4924-9f1f-4528-bb08-480676547ff8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.557782 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6b4924-9f1f-4528-bb08-480676547ff8-kube-api-access-tbzhv" (OuterVolumeSpecName: "kube-api-access-tbzhv") pod "3c6b4924-9f1f-4528-bb08-480676547ff8" (UID: "3c6b4924-9f1f-4528-bb08-480676547ff8"). InnerVolumeSpecName "kube-api-access-tbzhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.590230 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-inventory" (OuterVolumeSpecName: "inventory") pod "3c6b4924-9f1f-4528-bb08-480676547ff8" (UID: "3c6b4924-9f1f-4528-bb08-480676547ff8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.591499 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3c6b4924-9f1f-4528-bb08-480676547ff8" (UID: "3c6b4924-9f1f-4528-bb08-480676547ff8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.594470 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "3c6b4924-9f1f-4528-bb08-480676547ff8" (UID: "3c6b4924-9f1f-4528-bb08-480676547ff8"). 
InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.654190 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.654227 4678 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.654246 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.654259 4678 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6b4924-9f1f-4528-bb08-480676547ff8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.654270 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbzhv\" (UniqueName: \"kubernetes.io/projected/3c6b4924-9f1f-4528-bb08-480676547ff8-kube-api-access-tbzhv\") on node \"crc\" DevicePath \"\"" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.746578 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" event={"ID":"3c6b4924-9f1f-4528-bb08-480676547ff8","Type":"ContainerDied","Data":"788fcfadb91dcea11dbb35d253a4df79c63a1db191fd44914a84848109d7fb5c"} Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.746625 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="788fcfadb91dcea11dbb35d253a4df79c63a1db191fd44914a84848109d7fb5c" Nov 24 
11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.746712 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-87nh4" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.860607 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt"] Nov 24 11:58:50 crc kubenswrapper[4678]: E1124 11:58:50.861514 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6b4924-9f1f-4528-bb08-480676547ff8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.861537 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6b4924-9f1f-4528-bb08-480676547ff8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.861903 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6b4924-9f1f-4528-bb08-480676547ff8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.862921 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.865739 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.865790 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.866000 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.866040 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.866154 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.866270 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.868208 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.871365 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt"] Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.961686 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: 
I1124 11:58:50.961736 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23808fd9-feff-4e7c-835e-dd9658816050-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.961785 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.961876 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.961901 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8dmw\" (UniqueName: \"kubernetes.io/projected/23808fd9-feff-4e7c-835e-dd9658816050-kube-api-access-w8dmw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.961958 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.961979 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.962014 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:50 crc kubenswrapper[4678]: I1124 11:58:50.962063 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.063945 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.064028 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8dmw\" (UniqueName: \"kubernetes.io/projected/23808fd9-feff-4e7c-835e-dd9658816050-kube-api-access-w8dmw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.064181 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.064231 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.064266 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.064364 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.064397 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.064417 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23808fd9-feff-4e7c-835e-dd9658816050-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.064455 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.066313 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23808fd9-feff-4e7c-835e-dd9658816050-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.069857 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.069842 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.070859 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.070963 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.071044 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.071949 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.072950 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.084653 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8dmw\" (UniqueName: \"kubernetes.io/projected/23808fd9-feff-4e7c-835e-dd9658816050-kube-api-access-w8dmw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p2rvt\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.192134 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.782024 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt"] Nov 24 11:58:51 crc kubenswrapper[4678]: I1124 11:58:51.791284 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 11:58:52 crc kubenswrapper[4678]: I1124 11:58:52.773457 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" event={"ID":"23808fd9-feff-4e7c-835e-dd9658816050","Type":"ContainerStarted","Data":"4e7739be64f0e8f072da5656ae95262f8be8f179c744dba4204e7ec78fd45594"} Nov 24 11:58:52 crc kubenswrapper[4678]: I1124 11:58:52.774250 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" event={"ID":"23808fd9-feff-4e7c-835e-dd9658816050","Type":"ContainerStarted","Data":"7ecc9fa4afde720408ce65d3fc3ae6b09bf6b38bdf4bf3b8fd908a87ff6b0d86"} Nov 24 11:58:52 crc kubenswrapper[4678]: I1124 11:58:52.810771 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" podStartSLOduration=2.293772353 podStartE2EDuration="2.810742471s" podCreationTimestamp="2025-11-24 11:58:50 +0000 UTC" firstStartedPulling="2025-11-24 11:58:51.790995807 +0000 UTC m=+2542.722055446" lastFinishedPulling="2025-11-24 11:58:52.307965925 +0000 UTC m=+2543.239025564" observedRunningTime="2025-11-24 11:58:52.80661917 +0000 UTC m=+2543.737678809" watchObservedRunningTime="2025-11-24 11:58:52.810742471 +0000 UTC m=+2543.741802110" Nov 24 11:59:42 crc kubenswrapper[4678]: I1124 11:59:42.695013 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-74f7b98495-b5gj8" podUID="95ada9de-2ac2-4ea9-9d4d-0ef4293da59f" containerName="proxy-httpd" 
probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.156698 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks"] Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.158967 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.162378 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.162513 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.168239 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks"] Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.284375 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n88h8\" (UniqueName: \"kubernetes.io/projected/ccb98269-f363-4f12-9736-6f3e6723aa0b-kube-api-access-n88h8\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.284759 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb98269-f363-4f12-9736-6f3e6723aa0b-secret-volume\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc 
kubenswrapper[4678]: I1124 12:00:00.284949 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb98269-f363-4f12-9736-6f3e6723aa0b-config-volume\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.296705 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.296789 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.387852 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb98269-f363-4f12-9736-6f3e6723aa0b-secret-volume\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.387965 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb98269-f363-4f12-9736-6f3e6723aa0b-config-volume\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.388540 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n88h8\" (UniqueName: \"kubernetes.io/projected/ccb98269-f363-4f12-9736-6f3e6723aa0b-kube-api-access-n88h8\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.389539 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb98269-f363-4f12-9736-6f3e6723aa0b-config-volume\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.397173 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb98269-f363-4f12-9736-6f3e6723aa0b-secret-volume\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.408066 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n88h8\" (UniqueName: \"kubernetes.io/projected/ccb98269-f363-4f12-9736-6f3e6723aa0b-kube-api-access-n88h8\") pod \"collect-profiles-29399760-fsbks\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.481486 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:00 crc kubenswrapper[4678]: I1124 12:00:00.974345 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks"] Nov 24 12:00:01 crc kubenswrapper[4678]: I1124 12:00:01.424843 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" event={"ID":"ccb98269-f363-4f12-9736-6f3e6723aa0b","Type":"ContainerStarted","Data":"898e512ce07f91afbf276a656fb9929741073282892a94c4c0cbfb120c507daf"} Nov 24 12:00:01 crc kubenswrapper[4678]: I1124 12:00:01.425161 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" event={"ID":"ccb98269-f363-4f12-9736-6f3e6723aa0b","Type":"ContainerStarted","Data":"fca1bf2a7956654764d664d44d80d71f485d6578c203f2b530368e656e960127"} Nov 24 12:00:01 crc kubenswrapper[4678]: I1124 12:00:01.444075 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" podStartSLOduration=1.444057261 podStartE2EDuration="1.444057261s" podCreationTimestamp="2025-11-24 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:00:01.440926176 +0000 UTC m=+2612.371985815" watchObservedRunningTime="2025-11-24 12:00:01.444057261 +0000 UTC m=+2612.375116910" Nov 24 12:00:02 crc kubenswrapper[4678]: I1124 12:00:02.442483 4678 generic.go:334] "Generic (PLEG): container finished" podID="ccb98269-f363-4f12-9736-6f3e6723aa0b" containerID="898e512ce07f91afbf276a656fb9929741073282892a94c4c0cbfb120c507daf" exitCode=0 Nov 24 12:00:02 crc kubenswrapper[4678]: I1124 12:00:02.442816 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" event={"ID":"ccb98269-f363-4f12-9736-6f3e6723aa0b","Type":"ContainerDied","Data":"898e512ce07f91afbf276a656fb9929741073282892a94c4c0cbfb120c507daf"} Nov 24 12:00:03 crc kubenswrapper[4678]: I1124 12:00:03.924399 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:03 crc kubenswrapper[4678]: I1124 12:00:03.988342 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb98269-f363-4f12-9736-6f3e6723aa0b-secret-volume\") pod \"ccb98269-f363-4f12-9736-6f3e6723aa0b\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " Nov 24 12:00:03 crc kubenswrapper[4678]: I1124 12:00:03.988544 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n88h8\" (UniqueName: \"kubernetes.io/projected/ccb98269-f363-4f12-9736-6f3e6723aa0b-kube-api-access-n88h8\") pod \"ccb98269-f363-4f12-9736-6f3e6723aa0b\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " Nov 24 12:00:03 crc kubenswrapper[4678]: I1124 12:00:03.988930 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb98269-f363-4f12-9736-6f3e6723aa0b-config-volume\") pod \"ccb98269-f363-4f12-9736-6f3e6723aa0b\" (UID: \"ccb98269-f363-4f12-9736-6f3e6723aa0b\") " Nov 24 12:00:03 crc kubenswrapper[4678]: I1124 12:00:03.989906 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb98269-f363-4f12-9736-6f3e6723aa0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "ccb98269-f363-4f12-9736-6f3e6723aa0b" (UID: "ccb98269-f363-4f12-9736-6f3e6723aa0b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:00:03 crc kubenswrapper[4678]: I1124 12:00:03.991149 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb98269-f363-4f12-9736-6f3e6723aa0b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.005092 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb98269-f363-4f12-9736-6f3e6723aa0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ccb98269-f363-4f12-9736-6f3e6723aa0b" (UID: "ccb98269-f363-4f12-9736-6f3e6723aa0b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.007020 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb98269-f363-4f12-9736-6f3e6723aa0b-kube-api-access-n88h8" (OuterVolumeSpecName: "kube-api-access-n88h8") pod "ccb98269-f363-4f12-9736-6f3e6723aa0b" (UID: "ccb98269-f363-4f12-9736-6f3e6723aa0b"). InnerVolumeSpecName "kube-api-access-n88h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.093148 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccb98269-f363-4f12-9736-6f3e6723aa0b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.093345 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n88h8\" (UniqueName: \"kubernetes.io/projected/ccb98269-f363-4f12-9736-6f3e6723aa0b-kube-api-access-n88h8\") on node \"crc\" DevicePath \"\"" Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.480167 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" event={"ID":"ccb98269-f363-4f12-9736-6f3e6723aa0b","Type":"ContainerDied","Data":"fca1bf2a7956654764d664d44d80d71f485d6578c203f2b530368e656e960127"} Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.480238 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca1bf2a7956654764d664d44d80d71f485d6578c203f2b530368e656e960127" Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.480268 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks" Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.552051 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"] Nov 24 12:00:04 crc kubenswrapper[4678]: I1124 12:00:04.567187 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399715-h2fzj"] Nov 24 12:00:07 crc kubenswrapper[4678]: I1124 12:00:07.025106 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daea8216-5097-43f5-913a-eda16abaf508" path="/var/lib/kubelet/pods/daea8216-5097-43f5-913a-eda16abaf508/volumes" Nov 24 12:00:07 crc kubenswrapper[4678]: E1124 12:00:07.030767 4678 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.136s" Nov 24 12:00:30 crc kubenswrapper[4678]: I1124 12:00:30.296995 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:00:30 crc kubenswrapper[4678]: I1124 12:00:30.297979 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:00:37 crc kubenswrapper[4678]: I1124 12:00:37.711510 4678 scope.go:117] "RemoveContainer" containerID="795be823b1b1551d8ba9b667b4101d5059f40c8d7daa8be3adc7ead041418d4f" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.172787 4678 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-cron-29399761-g59n6"] Nov 24 12:01:00 crc kubenswrapper[4678]: E1124 12:01:00.174602 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb98269-f363-4f12-9736-6f3e6723aa0b" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.174625 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb98269-f363-4f12-9736-6f3e6723aa0b" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.175030 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb98269-f363-4f12-9736-6f3e6723aa0b" containerName="collect-profiles" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.176409 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.188170 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399761-g59n6"] Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.292333 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-config-data\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.292937 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-fernet-keys\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.292984 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbpvc\" (UniqueName: 
\"kubernetes.io/projected/25a49349-3ad1-4efb-a5b3-851d707c47ac-kube-api-access-wbpvc\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.293042 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-combined-ca-bundle\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.296765 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.296940 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.297049 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.300515 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d60c05291373c2a59fe98401e152effd6edd15bd4a9cf084c09c97e923c9a838"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.300659 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://d60c05291373c2a59fe98401e152effd6edd15bd4a9cf084c09c97e923c9a838" gracePeriod=600 Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.397145 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-config-data\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.397319 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-fernet-keys\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.397426 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbpvc\" (UniqueName: \"kubernetes.io/projected/25a49349-3ad1-4efb-a5b3-851d707c47ac-kube-api-access-wbpvc\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.397572 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-combined-ca-bundle\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc 
kubenswrapper[4678]: I1124 12:01:00.409810 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-config-data\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.410222 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-combined-ca-bundle\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.418582 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-fernet-keys\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.422245 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbpvc\" (UniqueName: \"kubernetes.io/projected/25a49349-3ad1-4efb-a5b3-851d707c47ac-kube-api-access-wbpvc\") pod \"keystone-cron-29399761-g59n6\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:00 crc kubenswrapper[4678]: I1124 12:01:00.515867 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:01 crc kubenswrapper[4678]: I1124 12:01:01.042999 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399761-g59n6"] Nov 24 12:01:01 crc kubenswrapper[4678]: W1124 12:01:01.047004 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25a49349_3ad1_4efb_a5b3_851d707c47ac.slice/crio-e1821fdeaa976a1040bfeb702b69050d1562814c59de0333ede5604dd68294ec WatchSource:0}: Error finding container e1821fdeaa976a1040bfeb702b69050d1562814c59de0333ede5604dd68294ec: Status 404 returned error can't find the container with id e1821fdeaa976a1040bfeb702b69050d1562814c59de0333ede5604dd68294ec Nov 24 12:01:01 crc kubenswrapper[4678]: I1124 12:01:01.150273 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="d60c05291373c2a59fe98401e152effd6edd15bd4a9cf084c09c97e923c9a838" exitCode=0 Nov 24 12:01:01 crc kubenswrapper[4678]: I1124 12:01:01.150362 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"d60c05291373c2a59fe98401e152effd6edd15bd4a9cf084c09c97e923c9a838"} Nov 24 12:01:01 crc kubenswrapper[4678]: I1124 12:01:01.150898 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533"} Nov 24 12:01:01 crc kubenswrapper[4678]: I1124 12:01:01.150926 4678 scope.go:117] "RemoveContainer" containerID="4fb335408cd1e374704600405072c6bfeb7a529e69b703a7c48eae6587602af7" Nov 24 12:01:01 crc kubenswrapper[4678]: I1124 12:01:01.155167 4678 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-cron-29399761-g59n6" event={"ID":"25a49349-3ad1-4efb-a5b3-851d707c47ac","Type":"ContainerStarted","Data":"e1821fdeaa976a1040bfeb702b69050d1562814c59de0333ede5604dd68294ec"} Nov 24 12:01:02 crc kubenswrapper[4678]: I1124 12:01:02.166608 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-g59n6" event={"ID":"25a49349-3ad1-4efb-a5b3-851d707c47ac","Type":"ContainerStarted","Data":"f8f05ee08a08acd7a03b901ae749f7b707fa6b15a59ddba3e28e3e9506a2e4ab"} Nov 24 12:01:02 crc kubenswrapper[4678]: I1124 12:01:02.193908 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399761-g59n6" podStartSLOduration=2.19388132 podStartE2EDuration="2.19388132s" podCreationTimestamp="2025-11-24 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:01:02.18755354 +0000 UTC m=+2673.118613189" watchObservedRunningTime="2025-11-24 12:01:02.19388132 +0000 UTC m=+2673.124940969" Nov 24 12:01:06 crc kubenswrapper[4678]: I1124 12:01:06.221206 4678 generic.go:334] "Generic (PLEG): container finished" podID="25a49349-3ad1-4efb-a5b3-851d707c47ac" containerID="f8f05ee08a08acd7a03b901ae749f7b707fa6b15a59ddba3e28e3e9506a2e4ab" exitCode=0 Nov 24 12:01:06 crc kubenswrapper[4678]: I1124 12:01:06.221320 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-g59n6" event={"ID":"25a49349-3ad1-4efb-a5b3-851d707c47ac","Type":"ContainerDied","Data":"f8f05ee08a08acd7a03b901ae749f7b707fa6b15a59ddba3e28e3e9506a2e4ab"} Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.731862 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.832317 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-fernet-keys\") pod \"25a49349-3ad1-4efb-a5b3-851d707c47ac\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.832839 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-combined-ca-bundle\") pod \"25a49349-3ad1-4efb-a5b3-851d707c47ac\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.832989 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-config-data\") pod \"25a49349-3ad1-4efb-a5b3-851d707c47ac\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.833071 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbpvc\" (UniqueName: \"kubernetes.io/projected/25a49349-3ad1-4efb-a5b3-851d707c47ac-kube-api-access-wbpvc\") pod \"25a49349-3ad1-4efb-a5b3-851d707c47ac\" (UID: \"25a49349-3ad1-4efb-a5b3-851d707c47ac\") " Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.841181 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "25a49349-3ad1-4efb-a5b3-851d707c47ac" (UID: "25a49349-3ad1-4efb-a5b3-851d707c47ac"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.842154 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a49349-3ad1-4efb-a5b3-851d707c47ac-kube-api-access-wbpvc" (OuterVolumeSpecName: "kube-api-access-wbpvc") pod "25a49349-3ad1-4efb-a5b3-851d707c47ac" (UID: "25a49349-3ad1-4efb-a5b3-851d707c47ac"). InnerVolumeSpecName "kube-api-access-wbpvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.898200 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25a49349-3ad1-4efb-a5b3-851d707c47ac" (UID: "25a49349-3ad1-4efb-a5b3-851d707c47ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.908482 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-config-data" (OuterVolumeSpecName: "config-data") pod "25a49349-3ad1-4efb-a5b3-851d707c47ac" (UID: "25a49349-3ad1-4efb-a5b3-851d707c47ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.939963 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.940229 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.940310 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbpvc\" (UniqueName: \"kubernetes.io/projected/25a49349-3ad1-4efb-a5b3-851d707c47ac-kube-api-access-wbpvc\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:07 crc kubenswrapper[4678]: I1124 12:01:07.940397 4678 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25a49349-3ad1-4efb-a5b3-851d707c47ac-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:08 crc kubenswrapper[4678]: I1124 12:01:08.253822 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399761-g59n6" event={"ID":"25a49349-3ad1-4efb-a5b3-851d707c47ac","Type":"ContainerDied","Data":"e1821fdeaa976a1040bfeb702b69050d1562814c59de0333ede5604dd68294ec"} Nov 24 12:01:08 crc kubenswrapper[4678]: I1124 12:01:08.254241 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1821fdeaa976a1040bfeb702b69050d1562814c59de0333ede5604dd68294ec" Nov 24 12:01:08 crc kubenswrapper[4678]: I1124 12:01:08.253915 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399761-g59n6" Nov 24 12:01:21 crc kubenswrapper[4678]: E1124 12:01:21.093128 4678 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.198s" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.472236 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gqf75"] Nov 24 12:01:21 crc kubenswrapper[4678]: E1124 12:01:21.473195 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a49349-3ad1-4efb-a5b3-851d707c47ac" containerName="keystone-cron" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.473238 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a49349-3ad1-4efb-a5b3-851d707c47ac" containerName="keystone-cron" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.474132 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a49349-3ad1-4efb-a5b3-851d707c47ac" containerName="keystone-cron" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.477810 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.484899 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gqf75"] Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.507616 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldv5c\" (UniqueName: \"kubernetes.io/projected/e6e307aa-d3ab-45a2-8616-03208ee14794-kube-api-access-ldv5c\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.507839 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-utilities\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.507900 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-catalog-content\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.610850 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-utilities\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.610952 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-catalog-content\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.611099 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldv5c\" (UniqueName: \"kubernetes.io/projected/e6e307aa-d3ab-45a2-8616-03208ee14794-kube-api-access-ldv5c\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.611587 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-utilities\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.612157 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-catalog-content\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.634037 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldv5c\" (UniqueName: \"kubernetes.io/projected/e6e307aa-d3ab-45a2-8616-03208ee14794-kube-api-access-ldv5c\") pod \"certified-operators-gqf75\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:21 crc kubenswrapper[4678]: I1124 12:01:21.801618 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:22 crc kubenswrapper[4678]: I1124 12:01:22.476088 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gqf75"] Nov 24 12:01:23 crc kubenswrapper[4678]: I1124 12:01:23.146249 4678 generic.go:334] "Generic (PLEG): container finished" podID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerID="2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503" exitCode=0 Nov 24 12:01:23 crc kubenswrapper[4678]: I1124 12:01:23.146301 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqf75" event={"ID":"e6e307aa-d3ab-45a2-8616-03208ee14794","Type":"ContainerDied","Data":"2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503"} Nov 24 12:01:23 crc kubenswrapper[4678]: I1124 12:01:23.146837 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqf75" event={"ID":"e6e307aa-d3ab-45a2-8616-03208ee14794","Type":"ContainerStarted","Data":"271b6120bc5a70eb9b87b4678434cc626729ab1341122c4d805b6755ff45c834"} Nov 24 12:01:24 crc kubenswrapper[4678]: I1124 12:01:24.157933 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqf75" event={"ID":"e6e307aa-d3ab-45a2-8616-03208ee14794","Type":"ContainerStarted","Data":"ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671"} Nov 24 12:01:25 crc kubenswrapper[4678]: I1124 12:01:25.170512 4678 generic.go:334] "Generic (PLEG): container finished" podID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerID="ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671" exitCode=0 Nov 24 12:01:25 crc kubenswrapper[4678]: I1124 12:01:25.170602 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqf75" 
event={"ID":"e6e307aa-d3ab-45a2-8616-03208ee14794","Type":"ContainerDied","Data":"ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671"} Nov 24 12:01:26 crc kubenswrapper[4678]: I1124 12:01:26.186486 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqf75" event={"ID":"e6e307aa-d3ab-45a2-8616-03208ee14794","Type":"ContainerStarted","Data":"8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f"} Nov 24 12:01:26 crc kubenswrapper[4678]: I1124 12:01:26.213651 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gqf75" podStartSLOduration=2.781427861 podStartE2EDuration="5.213617619s" podCreationTimestamp="2025-11-24 12:01:21 +0000 UTC" firstStartedPulling="2025-11-24 12:01:23.148684531 +0000 UTC m=+2694.079744170" lastFinishedPulling="2025-11-24 12:01:25.580874289 +0000 UTC m=+2696.511933928" observedRunningTime="2025-11-24 12:01:26.206103807 +0000 UTC m=+2697.137163456" watchObservedRunningTime="2025-11-24 12:01:26.213617619 +0000 UTC m=+2697.144677258" Nov 24 12:01:31 crc kubenswrapper[4678]: I1124 12:01:31.801995 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:31 crc kubenswrapper[4678]: I1124 12:01:31.803880 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:31 crc kubenswrapper[4678]: I1124 12:01:31.869955 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:32 crc kubenswrapper[4678]: I1124 12:01:32.293432 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:32 crc kubenswrapper[4678]: I1124 12:01:32.344875 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-gqf75"] Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.266481 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gqf75" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerName="registry-server" containerID="cri-o://8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f" gracePeriod=2 Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.779996 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.891926 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-catalog-content\") pod \"e6e307aa-d3ab-45a2-8616-03208ee14794\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.892103 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldv5c\" (UniqueName: \"kubernetes.io/projected/e6e307aa-d3ab-45a2-8616-03208ee14794-kube-api-access-ldv5c\") pod \"e6e307aa-d3ab-45a2-8616-03208ee14794\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.892286 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-utilities\") pod \"e6e307aa-d3ab-45a2-8616-03208ee14794\" (UID: \"e6e307aa-d3ab-45a2-8616-03208ee14794\") " Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.893101 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-utilities" (OuterVolumeSpecName: "utilities") pod "e6e307aa-d3ab-45a2-8616-03208ee14794" (UID: 
"e6e307aa-d3ab-45a2-8616-03208ee14794"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.898611 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e307aa-d3ab-45a2-8616-03208ee14794-kube-api-access-ldv5c" (OuterVolumeSpecName: "kube-api-access-ldv5c") pod "e6e307aa-d3ab-45a2-8616-03208ee14794" (UID: "e6e307aa-d3ab-45a2-8616-03208ee14794"). InnerVolumeSpecName "kube-api-access-ldv5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.940300 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6e307aa-d3ab-45a2-8616-03208ee14794" (UID: "e6e307aa-d3ab-45a2-8616-03208ee14794"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.995858 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.995901 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e307aa-d3ab-45a2-8616-03208ee14794-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:34 crc kubenswrapper[4678]: I1124 12:01:34.995916 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldv5c\" (UniqueName: \"kubernetes.io/projected/e6e307aa-d3ab-45a2-8616-03208ee14794-kube-api-access-ldv5c\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.280406 4678 generic.go:334] "Generic (PLEG): container finished" 
podID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerID="8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f" exitCode=0 Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.280463 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqf75" event={"ID":"e6e307aa-d3ab-45a2-8616-03208ee14794","Type":"ContainerDied","Data":"8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f"} Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.280513 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqf75" event={"ID":"e6e307aa-d3ab-45a2-8616-03208ee14794","Type":"ContainerDied","Data":"271b6120bc5a70eb9b87b4678434cc626729ab1341122c4d805b6755ff45c834"} Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.280533 4678 scope.go:117] "RemoveContainer" containerID="8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.280525 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqf75" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.319288 4678 scope.go:117] "RemoveContainer" containerID="ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.354900 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gqf75"] Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.358357 4678 scope.go:117] "RemoveContainer" containerID="2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.361475 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gqf75"] Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.427078 4678 scope.go:117] "RemoveContainer" containerID="8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f" Nov 24 12:01:35 crc kubenswrapper[4678]: E1124 12:01:35.427479 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f\": container with ID starting with 8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f not found: ID does not exist" containerID="8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.427515 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f"} err="failed to get container status \"8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f\": rpc error: code = NotFound desc = could not find container \"8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f\": container with ID starting with 8b6146a37403842ffdde0ce3d4d28ac426c3981d29d41acedc2cb0d02792cb6f not 
found: ID does not exist" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.427544 4678 scope.go:117] "RemoveContainer" containerID="ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671" Nov 24 12:01:35 crc kubenswrapper[4678]: E1124 12:01:35.427900 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671\": container with ID starting with ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671 not found: ID does not exist" containerID="ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.427930 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671"} err="failed to get container status \"ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671\": rpc error: code = NotFound desc = could not find container \"ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671\": container with ID starting with ccc3e5df1f0376ef91d63c05666a5db5b9e872f0398c73b91d266c0f1f425671 not found: ID does not exist" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.427949 4678 scope.go:117] "RemoveContainer" containerID="2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503" Nov 24 12:01:35 crc kubenswrapper[4678]: E1124 12:01:35.428244 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503\": container with ID starting with 2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503 not found: ID does not exist" containerID="2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.428268 4678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503"} err="failed to get container status \"2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503\": rpc error: code = NotFound desc = could not find container \"2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503\": container with ID starting with 2bebd2df4a8c70488cd6e7d2ceb8c51b3653d2c8a28d840713c687730a072503 not found: ID does not exist" Nov 24 12:01:35 crc kubenswrapper[4678]: I1124 12:01:35.910005 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" path="/var/lib/kubelet/pods/e6e307aa-d3ab-45a2-8616-03208ee14794/volumes" Nov 24 12:01:42 crc kubenswrapper[4678]: I1124 12:01:42.372474 4678 generic.go:334] "Generic (PLEG): container finished" podID="23808fd9-feff-4e7c-835e-dd9658816050" containerID="4e7739be64f0e8f072da5656ae95262f8be8f179c744dba4204e7ec78fd45594" exitCode=0 Nov 24 12:01:42 crc kubenswrapper[4678]: I1124 12:01:42.373074 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" event={"ID":"23808fd9-feff-4e7c-835e-dd9658816050","Type":"ContainerDied","Data":"4e7739be64f0e8f072da5656ae95262f8be8f179c744dba4204e7ec78fd45594"} Nov 24 12:01:43 crc kubenswrapper[4678]: I1124 12:01:43.947876 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.049913 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-1\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.050581 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-inventory\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.050711 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-1\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.050828 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-ssh-key\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.050887 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-combined-ca-bundle\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.050960 4678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-0\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.050994 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23808fd9-feff-4e7c-835e-dd9658816050-nova-extra-config-0\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.051037 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8dmw\" (UniqueName: \"kubernetes.io/projected/23808fd9-feff-4e7c-835e-dd9658816050-kube-api-access-w8dmw\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.051085 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-0\") pod \"23808fd9-feff-4e7c-835e-dd9658816050\" (UID: \"23808fd9-feff-4e7c-835e-dd9658816050\") " Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.059447 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.075420 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23808fd9-feff-4e7c-835e-dd9658816050-kube-api-access-w8dmw" (OuterVolumeSpecName: "kube-api-access-w8dmw") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "kube-api-access-w8dmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.088224 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.090313 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-inventory" (OuterVolumeSpecName: "inventory") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.090872 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.091420 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.103991 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23808fd9-feff-4e7c-835e-dd9658816050-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.107711 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.109434 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "23808fd9-feff-4e7c-835e-dd9658816050" (UID: "23808fd9-feff-4e7c-835e-dd9658816050"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.154867 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.155110 4678 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.155190 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.155242 4678 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.155292 4678 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.155342 4678 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23808fd9-feff-4e7c-835e-dd9658816050-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.155400 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8dmw\" (UniqueName: \"kubernetes.io/projected/23808fd9-feff-4e7c-835e-dd9658816050-kube-api-access-w8dmw\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc 
kubenswrapper[4678]: I1124 12:01:44.155456 4678 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.155505 4678 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23808fd9-feff-4e7c-835e-dd9658816050-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.395065 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" event={"ID":"23808fd9-feff-4e7c-835e-dd9658816050","Type":"ContainerDied","Data":"7ecc9fa4afde720408ce65d3fc3ae6b09bf6b38bdf4bf3b8fd908a87ff6b0d86"} Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.395113 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p2rvt" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.395130 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ecc9fa4afde720408ce65d3fc3ae6b09bf6b38bdf4bf3b8fd908a87ff6b0d86" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.499101 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn"] Nov 24 12:01:44 crc kubenswrapper[4678]: E1124 12:01:44.499611 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerName="extract-utilities" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.499633 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerName="extract-utilities" Nov 24 12:01:44 crc kubenswrapper[4678]: E1124 12:01:44.499655 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23808fd9-feff-4e7c-835e-dd9658816050" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.499664 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="23808fd9-feff-4e7c-835e-dd9658816050" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 12:01:44 crc kubenswrapper[4678]: E1124 12:01:44.499697 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerName="extract-content" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.499704 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerName="extract-content" Nov 24 12:01:44 crc kubenswrapper[4678]: E1124 12:01:44.499748 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerName="registry-server" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 
12:01:44.499755 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerName="registry-server" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.499989 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="23808fd9-feff-4e7c-835e-dd9658816050" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.500009 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e307aa-d3ab-45a2-8616-03208ee14794" containerName="registry-server" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.501116 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.503831 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.504042 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.506706 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.506876 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.507159 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.518374 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn"] Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.564285 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wkl\" (UniqueName: \"kubernetes.io/projected/8106bb6e-2abf-42db-8e44-80656738e917-kube-api-access-c9wkl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.564344 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.564419 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.564456 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.564476 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.564497 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.564517 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.666959 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.667337 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.667367 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.667397 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.667420 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.667658 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wkl\" (UniqueName: \"kubernetes.io/projected/8106bb6e-2abf-42db-8e44-80656738e917-kube-api-access-c9wkl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc 
kubenswrapper[4678]: I1124 12:01:44.667753 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.673807 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.674052 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.674118 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.674211 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.678149 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.684424 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.689183 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wkl\" (UniqueName: \"kubernetes.io/projected/8106bb6e-2abf-42db-8e44-80656738e917-kube-api-access-c9wkl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:44 crc kubenswrapper[4678]: I1124 12:01:44.822485 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:01:45 crc kubenswrapper[4678]: I1124 12:01:45.552856 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn"] Nov 24 12:01:46 crc kubenswrapper[4678]: I1124 12:01:46.418188 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" event={"ID":"8106bb6e-2abf-42db-8e44-80656738e917","Type":"ContainerStarted","Data":"19f2e8ad020379c5befe77913f88fae76cd4c3645da245047b2fc4e8d514427a"} Nov 24 12:01:46 crc kubenswrapper[4678]: I1124 12:01:46.418709 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" event={"ID":"8106bb6e-2abf-42db-8e44-80656738e917","Type":"ContainerStarted","Data":"ef7a158f8a9db234cac5c7ce0b7371dc3850fa95360a9768e4f4ce49b3c1b981"} Nov 24 12:01:46 crc kubenswrapper[4678]: I1124 12:01:46.444868 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" podStartSLOduration=2.02344032 podStartE2EDuration="2.444842652s" podCreationTimestamp="2025-11-24 12:01:44 +0000 UTC" firstStartedPulling="2025-11-24 12:01:45.564472815 +0000 UTC m=+2716.495532454" lastFinishedPulling="2025-11-24 12:01:45.985875157 +0000 UTC m=+2716.916934786" observedRunningTime="2025-11-24 12:01:46.434768521 +0000 UTC m=+2717.365828150" watchObservedRunningTime="2025-11-24 12:01:46.444842652 +0000 UTC m=+2717.375902291" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.664494 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dv6k9"] Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.667345 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.679587 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dv6k9"] Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.773115 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-utilities\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.773163 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-catalog-content\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.773340 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62cdp\" (UniqueName: \"kubernetes.io/projected/ba0e2296-1df4-4777-a5b1-17e683d590ca-kube-api-access-62cdp\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.875557 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62cdp\" (UniqueName: \"kubernetes.io/projected/ba0e2296-1df4-4777-a5b1-17e683d590ca-kube-api-access-62cdp\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.875736 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-utilities\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.875758 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-catalog-content\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.876428 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-utilities\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.876902 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-catalog-content\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:52 crc kubenswrapper[4678]: I1124 12:01:52.901856 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62cdp\" (UniqueName: \"kubernetes.io/projected/ba0e2296-1df4-4777-a5b1-17e683d590ca-kube-api-access-62cdp\") pod \"community-operators-dv6k9\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:53 crc kubenswrapper[4678]: I1124 12:01:53.013542 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:01:53 crc kubenswrapper[4678]: I1124 12:01:53.638041 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dv6k9"] Nov 24 12:01:54 crc kubenswrapper[4678]: I1124 12:01:54.490486 4678 generic.go:334] "Generic (PLEG): container finished" podID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerID="b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a" exitCode=0 Nov 24 12:01:54 crc kubenswrapper[4678]: I1124 12:01:54.490530 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dv6k9" event={"ID":"ba0e2296-1df4-4777-a5b1-17e683d590ca","Type":"ContainerDied","Data":"b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a"} Nov 24 12:01:54 crc kubenswrapper[4678]: I1124 12:01:54.490555 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dv6k9" event={"ID":"ba0e2296-1df4-4777-a5b1-17e683d590ca","Type":"ContainerStarted","Data":"9f768089eeb6bd1af8b6ed543a4352cc6d9444eb0b3c1fe253b2881c7ce652fe"} Nov 24 12:01:55 crc kubenswrapper[4678]: I1124 12:01:55.505020 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dv6k9" event={"ID":"ba0e2296-1df4-4777-a5b1-17e683d590ca","Type":"ContainerStarted","Data":"894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69"} Nov 24 12:01:57 crc kubenswrapper[4678]: I1124 12:01:57.526466 4678 generic.go:334] "Generic (PLEG): container finished" podID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerID="894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69" exitCode=0 Nov 24 12:01:57 crc kubenswrapper[4678]: I1124 12:01:57.526527 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dv6k9" 
event={"ID":"ba0e2296-1df4-4777-a5b1-17e683d590ca","Type":"ContainerDied","Data":"894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69"} Nov 24 12:01:58 crc kubenswrapper[4678]: I1124 12:01:58.541393 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dv6k9" event={"ID":"ba0e2296-1df4-4777-a5b1-17e683d590ca","Type":"ContainerStarted","Data":"f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b"} Nov 24 12:01:58 crc kubenswrapper[4678]: I1124 12:01:58.567057 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dv6k9" podStartSLOduration=2.965790203 podStartE2EDuration="6.567035828s" podCreationTimestamp="2025-11-24 12:01:52 +0000 UTC" firstStartedPulling="2025-11-24 12:01:54.495401679 +0000 UTC m=+2725.426461318" lastFinishedPulling="2025-11-24 12:01:58.096647304 +0000 UTC m=+2729.027706943" observedRunningTime="2025-11-24 12:01:58.563003959 +0000 UTC m=+2729.494063598" watchObservedRunningTime="2025-11-24 12:01:58.567035828 +0000 UTC m=+2729.498095467" Nov 24 12:02:03 crc kubenswrapper[4678]: I1124 12:02:03.015564 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:02:03 crc kubenswrapper[4678]: I1124 12:02:03.016975 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:02:03 crc kubenswrapper[4678]: I1124 12:02:03.065342 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:02:03 crc kubenswrapper[4678]: I1124 12:02:03.681737 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:02:03 crc kubenswrapper[4678]: I1124 12:02:03.734346 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-dv6k9"] Nov 24 12:02:05 crc kubenswrapper[4678]: I1124 12:02:05.643534 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dv6k9" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerName="registry-server" containerID="cri-o://f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b" gracePeriod=2 Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.335991 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.436704 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62cdp\" (UniqueName: \"kubernetes.io/projected/ba0e2296-1df4-4777-a5b1-17e683d590ca-kube-api-access-62cdp\") pod \"ba0e2296-1df4-4777-a5b1-17e683d590ca\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.437061 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-catalog-content\") pod \"ba0e2296-1df4-4777-a5b1-17e683d590ca\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.437101 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-utilities\") pod \"ba0e2296-1df4-4777-a5b1-17e683d590ca\" (UID: \"ba0e2296-1df4-4777-a5b1-17e683d590ca\") " Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.441505 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-utilities" (OuterVolumeSpecName: "utilities") pod "ba0e2296-1df4-4777-a5b1-17e683d590ca" (UID: 
"ba0e2296-1df4-4777-a5b1-17e683d590ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.444818 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.448935 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0e2296-1df4-4777-a5b1-17e683d590ca-kube-api-access-62cdp" (OuterVolumeSpecName: "kube-api-access-62cdp") pod "ba0e2296-1df4-4777-a5b1-17e683d590ca" (UID: "ba0e2296-1df4-4777-a5b1-17e683d590ca"). InnerVolumeSpecName "kube-api-access-62cdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.494191 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba0e2296-1df4-4777-a5b1-17e683d590ca" (UID: "ba0e2296-1df4-4777-a5b1-17e683d590ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.547889 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba0e2296-1df4-4777-a5b1-17e683d590ca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.547934 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62cdp\" (UniqueName: \"kubernetes.io/projected/ba0e2296-1df4-4777-a5b1-17e683d590ca-kube-api-access-62cdp\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.659228 4678 generic.go:334] "Generic (PLEG): container finished" podID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerID="f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b" exitCode=0 Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.659298 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dv6k9" event={"ID":"ba0e2296-1df4-4777-a5b1-17e683d590ca","Type":"ContainerDied","Data":"f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b"} Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.659334 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dv6k9" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.659367 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dv6k9" event={"ID":"ba0e2296-1df4-4777-a5b1-17e683d590ca","Type":"ContainerDied","Data":"9f768089eeb6bd1af8b6ed543a4352cc6d9444eb0b3c1fe253b2881c7ce652fe"} Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.659392 4678 scope.go:117] "RemoveContainer" containerID="f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.688251 4678 scope.go:117] "RemoveContainer" containerID="894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.702660 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dv6k9"] Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.714445 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dv6k9"] Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.726823 4678 scope.go:117] "RemoveContainer" containerID="b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.785950 4678 scope.go:117] "RemoveContainer" containerID="f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b" Nov 24 12:02:06 crc kubenswrapper[4678]: E1124 12:02:06.786621 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b\": container with ID starting with f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b not found: ID does not exist" containerID="f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.786734 4678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b"} err="failed to get container status \"f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b\": rpc error: code = NotFound desc = could not find container \"f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b\": container with ID starting with f5f1b5bab57cb36ca2cc2249d85244f26f6936b7c9a6be43b44b46de9b9bb60b not found: ID does not exist" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.786781 4678 scope.go:117] "RemoveContainer" containerID="894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69" Nov 24 12:02:06 crc kubenswrapper[4678]: E1124 12:02:06.787545 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69\": container with ID starting with 894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69 not found: ID does not exist" containerID="894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.787597 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69"} err="failed to get container status \"894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69\": rpc error: code = NotFound desc = could not find container \"894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69\": container with ID starting with 894ef09ef26dbbeacf92e1edb00031bcbde131e8f64b66734773b004beeaff69 not found: ID does not exist" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.787637 4678 scope.go:117] "RemoveContainer" containerID="b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a" Nov 24 12:02:06 crc kubenswrapper[4678]: E1124 
12:02:06.788041 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a\": container with ID starting with b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a not found: ID does not exist" containerID="b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a" Nov 24 12:02:06 crc kubenswrapper[4678]: I1124 12:02:06.788076 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a"} err="failed to get container status \"b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a\": rpc error: code = NotFound desc = could not find container \"b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a\": container with ID starting with b885de3b2e4d7a6f2e92defa484b027af63bb86348d3b8685a4ca98a5a3cdc3a not found: ID does not exist" Nov 24 12:02:07 crc kubenswrapper[4678]: I1124 12:02:07.911037 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" path="/var/lib/kubelet/pods/ba0e2296-1df4-4777-a5b1-17e683d590ca/volumes" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.315125 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-th6lm"] Nov 24 12:02:19 crc kubenswrapper[4678]: E1124 12:02:19.316145 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerName="extract-content" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.316162 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerName="extract-content" Nov 24 12:02:19 crc kubenswrapper[4678]: E1124 12:02:19.316178 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" 
containerName="extract-utilities" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.316185 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerName="extract-utilities" Nov 24 12:02:19 crc kubenswrapper[4678]: E1124 12:02:19.316195 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerName="registry-server" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.316200 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerName="registry-server" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.331857 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba0e2296-1df4-4777-a5b1-17e683d590ca" containerName="registry-server" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.343628 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.353880 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-th6lm"] Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.385316 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7m4\" (UniqueName: \"kubernetes.io/projected/f7a6e2b2-5559-4f5f-8602-605febf66fee-kube-api-access-4k7m4\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.385402 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-catalog-content\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " 
pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.385493 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-utilities\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.504776 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-catalog-content\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.504954 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-utilities\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.505516 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k7m4\" (UniqueName: \"kubernetes.io/projected/f7a6e2b2-5559-4f5f-8602-605febf66fee-kube-api-access-4k7m4\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.506406 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-catalog-content\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " 
pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.506703 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-utilities\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.520645 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fz2vh"] Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.523325 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.556500 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz2vh"] Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.566865 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k7m4\" (UniqueName: \"kubernetes.io/projected/f7a6e2b2-5559-4f5f-8602-605febf66fee-kube-api-access-4k7m4\") pod \"redhat-marketplace-th6lm\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.610232 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-utilities\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.610332 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-catalog-content\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.610441 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zg7q\" (UniqueName: \"kubernetes.io/projected/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-kube-api-access-9zg7q\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.680280 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.712487 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-utilities\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.712577 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-catalog-content\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.712698 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zg7q\" (UniqueName: \"kubernetes.io/projected/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-kube-api-access-9zg7q\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " 
pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.714012 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-utilities\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.714286 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-catalog-content\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.787715 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zg7q\" (UniqueName: \"kubernetes.io/projected/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-kube-api-access-9zg7q\") pod \"redhat-operators-fz2vh\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:19 crc kubenswrapper[4678]: I1124 12:02:19.851400 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:20 crc kubenswrapper[4678]: I1124 12:02:20.550944 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-th6lm"] Nov 24 12:02:20 crc kubenswrapper[4678]: W1124 12:02:20.737452 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fb8a76b_e1df_41d7_b607_2f6014b7d25c.slice/crio-425639a5d39dc51a5d55c86946312656023aac25b7f47ccf6970b22f3646a5a6 WatchSource:0}: Error finding container 425639a5d39dc51a5d55c86946312656023aac25b7f47ccf6970b22f3646a5a6: Status 404 returned error can't find the container with id 425639a5d39dc51a5d55c86946312656023aac25b7f47ccf6970b22f3646a5a6 Nov 24 12:02:20 crc kubenswrapper[4678]: I1124 12:02:20.738794 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz2vh"] Nov 24 12:02:20 crc kubenswrapper[4678]: I1124 12:02:20.842812 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th6lm" event={"ID":"f7a6e2b2-5559-4f5f-8602-605febf66fee","Type":"ContainerStarted","Data":"825b9bc3eedbcf66789b24474d9c244282276cba6ec899bade9aa3cb64ff5ba3"} Nov 24 12:02:20 crc kubenswrapper[4678]: I1124 12:02:20.842869 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th6lm" event={"ID":"f7a6e2b2-5559-4f5f-8602-605febf66fee","Type":"ContainerStarted","Data":"478584b68a024d3b9126cc7fbc0e9ccfe81b5d74b5ebdd4f3b858d65649093a2"} Nov 24 12:02:20 crc kubenswrapper[4678]: I1124 12:02:20.846549 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz2vh" event={"ID":"7fb8a76b-e1df-41d7-b607-2f6014b7d25c","Type":"ContainerStarted","Data":"425639a5d39dc51a5d55c86946312656023aac25b7f47ccf6970b22f3646a5a6"} Nov 24 12:02:21 crc kubenswrapper[4678]: I1124 12:02:21.869476 4678 
generic.go:334] "Generic (PLEG): container finished" podID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerID="825b9bc3eedbcf66789b24474d9c244282276cba6ec899bade9aa3cb64ff5ba3" exitCode=0 Nov 24 12:02:21 crc kubenswrapper[4678]: I1124 12:02:21.869734 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th6lm" event={"ID":"f7a6e2b2-5559-4f5f-8602-605febf66fee","Type":"ContainerDied","Data":"825b9bc3eedbcf66789b24474d9c244282276cba6ec899bade9aa3cb64ff5ba3"} Nov 24 12:02:21 crc kubenswrapper[4678]: I1124 12:02:21.873962 4678 generic.go:334] "Generic (PLEG): container finished" podID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerID="cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88" exitCode=0 Nov 24 12:02:21 crc kubenswrapper[4678]: I1124 12:02:21.874012 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz2vh" event={"ID":"7fb8a76b-e1df-41d7-b607-2f6014b7d25c","Type":"ContainerDied","Data":"cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88"} Nov 24 12:02:23 crc kubenswrapper[4678]: I1124 12:02:23.936013 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th6lm" event={"ID":"f7a6e2b2-5559-4f5f-8602-605febf66fee","Type":"ContainerStarted","Data":"9c3eaf70380dc03d9e3e2cebee01d50241ccd32524aa72af3122215352e1ee7b"} Nov 24 12:02:23 crc kubenswrapper[4678]: I1124 12:02:23.938800 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz2vh" event={"ID":"7fb8a76b-e1df-41d7-b607-2f6014b7d25c","Type":"ContainerStarted","Data":"c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48"} Nov 24 12:02:24 crc kubenswrapper[4678]: I1124 12:02:24.953046 4678 generic.go:334] "Generic (PLEG): container finished" podID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerID="9c3eaf70380dc03d9e3e2cebee01d50241ccd32524aa72af3122215352e1ee7b" exitCode=0 Nov 24 12:02:24 
crc kubenswrapper[4678]: I1124 12:02:24.953123 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th6lm" event={"ID":"f7a6e2b2-5559-4f5f-8602-605febf66fee","Type":"ContainerDied","Data":"9c3eaf70380dc03d9e3e2cebee01d50241ccd32524aa72af3122215352e1ee7b"} Nov 24 12:02:26 crc kubenswrapper[4678]: I1124 12:02:26.981937 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th6lm" event={"ID":"f7a6e2b2-5559-4f5f-8602-605febf66fee","Type":"ContainerStarted","Data":"624299ddf29af0a8d61df777ed87ad9310f9bc519a941f1f0c96971ce383e47f"} Nov 24 12:02:27 crc kubenswrapper[4678]: I1124 12:02:27.010261 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-th6lm" podStartSLOduration=3.889834907 podStartE2EDuration="8.010242141s" podCreationTimestamp="2025-11-24 12:02:19 +0000 UTC" firstStartedPulling="2025-11-24 12:02:21.872239408 +0000 UTC m=+2752.803299047" lastFinishedPulling="2025-11-24 12:02:25.992646642 +0000 UTC m=+2756.923706281" observedRunningTime="2025-11-24 12:02:27.001789564 +0000 UTC m=+2757.932849223" watchObservedRunningTime="2025-11-24 12:02:27.010242141 +0000 UTC m=+2757.941301780" Nov 24 12:02:29 crc kubenswrapper[4678]: I1124 12:02:29.681903 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:29 crc kubenswrapper[4678]: I1124 12:02:29.682951 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:29 crc kubenswrapper[4678]: I1124 12:02:29.753657 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:32 crc kubenswrapper[4678]: I1124 12:02:32.035014 4678 generic.go:334] "Generic (PLEG): container finished" podID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" 
containerID="c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48" exitCode=0 Nov 24 12:02:32 crc kubenswrapper[4678]: I1124 12:02:32.035105 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz2vh" event={"ID":"7fb8a76b-e1df-41d7-b607-2f6014b7d25c","Type":"ContainerDied","Data":"c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48"} Nov 24 12:02:34 crc kubenswrapper[4678]: I1124 12:02:34.059856 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz2vh" event={"ID":"7fb8a76b-e1df-41d7-b607-2f6014b7d25c","Type":"ContainerStarted","Data":"128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf"} Nov 24 12:02:34 crc kubenswrapper[4678]: I1124 12:02:34.093134 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fz2vh" podStartSLOduration=3.93854146 podStartE2EDuration="15.093109607s" podCreationTimestamp="2025-11-24 12:02:19 +0000 UTC" firstStartedPulling="2025-11-24 12:02:21.876258336 +0000 UTC m=+2752.807317975" lastFinishedPulling="2025-11-24 12:02:33.030826483 +0000 UTC m=+2763.961886122" observedRunningTime="2025-11-24 12:02:34.081650227 +0000 UTC m=+2765.012709876" watchObservedRunningTime="2025-11-24 12:02:34.093109607 +0000 UTC m=+2765.024169246" Nov 24 12:02:39 crc kubenswrapper[4678]: I1124 12:02:39.739731 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:39 crc kubenswrapper[4678]: I1124 12:02:39.797531 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-th6lm"] Nov 24 12:02:39 crc kubenswrapper[4678]: I1124 12:02:39.856381 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:39 crc kubenswrapper[4678]: I1124 12:02:39.856432 4678 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:40 crc kubenswrapper[4678]: I1124 12:02:40.121448 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-th6lm" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerName="registry-server" containerID="cri-o://624299ddf29af0a8d61df777ed87ad9310f9bc519a941f1f0c96971ce383e47f" gracePeriod=2 Nov 24 12:02:40 crc kubenswrapper[4678]: I1124 12:02:40.912748 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fz2vh" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="registry-server" probeResult="failure" output=< Nov 24 12:02:40 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:02:40 crc kubenswrapper[4678]: > Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.139735 4678 generic.go:334] "Generic (PLEG): container finished" podID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerID="624299ddf29af0a8d61df777ed87ad9310f9bc519a941f1f0c96971ce383e47f" exitCode=0 Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.139802 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th6lm" event={"ID":"f7a6e2b2-5559-4f5f-8602-605febf66fee","Type":"ContainerDied","Data":"624299ddf29af0a8d61df777ed87ad9310f9bc519a941f1f0c96971ce383e47f"} Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.590396 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.720004 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-catalog-content\") pod \"f7a6e2b2-5559-4f5f-8602-605febf66fee\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.720215 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-utilities\") pod \"f7a6e2b2-5559-4f5f-8602-605febf66fee\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.720278 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k7m4\" (UniqueName: \"kubernetes.io/projected/f7a6e2b2-5559-4f5f-8602-605febf66fee-kube-api-access-4k7m4\") pod \"f7a6e2b2-5559-4f5f-8602-605febf66fee\" (UID: \"f7a6e2b2-5559-4f5f-8602-605febf66fee\") " Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.721280 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-utilities" (OuterVolumeSpecName: "utilities") pod "f7a6e2b2-5559-4f5f-8602-605febf66fee" (UID: "f7a6e2b2-5559-4f5f-8602-605febf66fee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.744108 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7a6e2b2-5559-4f5f-8602-605febf66fee" (UID: "f7a6e2b2-5559-4f5f-8602-605febf66fee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.761844 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a6e2b2-5559-4f5f-8602-605febf66fee-kube-api-access-4k7m4" (OuterVolumeSpecName: "kube-api-access-4k7m4") pod "f7a6e2b2-5559-4f5f-8602-605febf66fee" (UID: "f7a6e2b2-5559-4f5f-8602-605febf66fee"). InnerVolumeSpecName "kube-api-access-4k7m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.823018 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.823058 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7a6e2b2-5559-4f5f-8602-605febf66fee-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:41 crc kubenswrapper[4678]: I1124 12:02:41.823068 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k7m4\" (UniqueName: \"kubernetes.io/projected/f7a6e2b2-5559-4f5f-8602-605febf66fee-kube-api-access-4k7m4\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:42 crc kubenswrapper[4678]: I1124 12:02:42.159267 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-th6lm" event={"ID":"f7a6e2b2-5559-4f5f-8602-605febf66fee","Type":"ContainerDied","Data":"478584b68a024d3b9126cc7fbc0e9ccfe81b5d74b5ebdd4f3b858d65649093a2"} Nov 24 12:02:42 crc kubenswrapper[4678]: I1124 12:02:42.159358 4678 scope.go:117] "RemoveContainer" containerID="624299ddf29af0a8d61df777ed87ad9310f9bc519a941f1f0c96971ce383e47f" Nov 24 12:02:42 crc kubenswrapper[4678]: I1124 12:02:42.159564 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-th6lm" Nov 24 12:02:42 crc kubenswrapper[4678]: I1124 12:02:42.194522 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-th6lm"] Nov 24 12:02:42 crc kubenswrapper[4678]: I1124 12:02:42.195879 4678 scope.go:117] "RemoveContainer" containerID="9c3eaf70380dc03d9e3e2cebee01d50241ccd32524aa72af3122215352e1ee7b" Nov 24 12:02:42 crc kubenswrapper[4678]: I1124 12:02:42.205184 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-th6lm"] Nov 24 12:02:42 crc kubenswrapper[4678]: I1124 12:02:42.224257 4678 scope.go:117] "RemoveContainer" containerID="825b9bc3eedbcf66789b24474d9c244282276cba6ec899bade9aa3cb64ff5ba3" Nov 24 12:02:43 crc kubenswrapper[4678]: I1124 12:02:43.911475 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" path="/var/lib/kubelet/pods/f7a6e2b2-5559-4f5f-8602-605febf66fee/volumes" Nov 24 12:02:49 crc kubenswrapper[4678]: I1124 12:02:49.911331 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:49 crc kubenswrapper[4678]: I1124 12:02:49.970709 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:50 crc kubenswrapper[4678]: I1124 12:02:50.523536 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fz2vh"] Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.252550 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fz2vh" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="registry-server" containerID="cri-o://128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf" gracePeriod=2 Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 
12:02:51.787206 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.888139 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-utilities\") pod \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.888195 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-catalog-content\") pod \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.888249 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zg7q\" (UniqueName: \"kubernetes.io/projected/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-kube-api-access-9zg7q\") pod \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\" (UID: \"7fb8a76b-e1df-41d7-b607-2f6014b7d25c\") " Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.889681 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-utilities" (OuterVolumeSpecName: "utilities") pod "7fb8a76b-e1df-41d7-b607-2f6014b7d25c" (UID: "7fb8a76b-e1df-41d7-b607-2f6014b7d25c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.896972 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-kube-api-access-9zg7q" (OuterVolumeSpecName: "kube-api-access-9zg7q") pod "7fb8a76b-e1df-41d7-b607-2f6014b7d25c" (UID: "7fb8a76b-e1df-41d7-b607-2f6014b7d25c"). InnerVolumeSpecName "kube-api-access-9zg7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.984734 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fb8a76b-e1df-41d7-b607-2f6014b7d25c" (UID: "7fb8a76b-e1df-41d7-b607-2f6014b7d25c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.992232 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.992266 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:51 crc kubenswrapper[4678]: I1124 12:02:51.992284 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zg7q\" (UniqueName: \"kubernetes.io/projected/7fb8a76b-e1df-41d7-b607-2f6014b7d25c-kube-api-access-9zg7q\") on node \"crc\" DevicePath \"\"" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.268544 4678 generic.go:334] "Generic (PLEG): container finished" podID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" 
containerID="128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf" exitCode=0 Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.268595 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz2vh" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.268621 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz2vh" event={"ID":"7fb8a76b-e1df-41d7-b607-2f6014b7d25c","Type":"ContainerDied","Data":"128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf"} Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.268788 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz2vh" event={"ID":"7fb8a76b-e1df-41d7-b607-2f6014b7d25c","Type":"ContainerDied","Data":"425639a5d39dc51a5d55c86946312656023aac25b7f47ccf6970b22f3646a5a6"} Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.268787 4678 scope.go:117] "RemoveContainer" containerID="128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.307244 4678 scope.go:117] "RemoveContainer" containerID="c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.313452 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fz2vh"] Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.327561 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fz2vh"] Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.337759 4678 scope.go:117] "RemoveContainer" containerID="cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.424367 4678 scope.go:117] "RemoveContainer" containerID="128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf" Nov 24 12:02:52 crc 
kubenswrapper[4678]: E1124 12:02:52.424934 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf\": container with ID starting with 128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf not found: ID does not exist" containerID="128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.424976 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf"} err="failed to get container status \"128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf\": rpc error: code = NotFound desc = could not find container \"128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf\": container with ID starting with 128383dd11a91b813b3cc8ac4fd94170665bfb01ede3e63f7871adb02e778fbf not found: ID does not exist" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.425000 4678 scope.go:117] "RemoveContainer" containerID="c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48" Nov 24 12:02:52 crc kubenswrapper[4678]: E1124 12:02:52.425946 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48\": container with ID starting with c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48 not found: ID does not exist" containerID="c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.426044 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48"} err="failed to get container status 
\"c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48\": rpc error: code = NotFound desc = could not find container \"c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48\": container with ID starting with c37ee3afd861d9b5fca8440f1ea996e8608219d7ecf7f51008cd56c4511e4b48 not found: ID does not exist" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.426092 4678 scope.go:117] "RemoveContainer" containerID="cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88" Nov 24 12:02:52 crc kubenswrapper[4678]: E1124 12:02:52.426451 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88\": container with ID starting with cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88 not found: ID does not exist" containerID="cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88" Nov 24 12:02:52 crc kubenswrapper[4678]: I1124 12:02:52.426487 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88"} err="failed to get container status \"cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88\": rpc error: code = NotFound desc = could not find container \"cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88\": container with ID starting with cf73c078ba932c961972c596ead50ec2ec96076f40380fb13f21ec9be60e6c88 not found: ID does not exist" Nov 24 12:02:53 crc kubenswrapper[4678]: I1124 12:02:53.914134 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" path="/var/lib/kubelet/pods/7fb8a76b-e1df-41d7-b607-2f6014b7d25c/volumes" Nov 24 12:03:00 crc kubenswrapper[4678]: I1124 12:03:00.296998 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:03:00 crc kubenswrapper[4678]: I1124 12:03:00.297529 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:03:30 crc kubenswrapper[4678]: I1124 12:03:30.296706 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:03:30 crc kubenswrapper[4678]: I1124 12:03:30.298867 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:04:00 crc kubenswrapper[4678]: I1124 12:04:00.297110 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:04:00 crc kubenswrapper[4678]: I1124 12:04:00.297665 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:04:00 crc kubenswrapper[4678]: I1124 12:04:00.297748 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:04:00 crc kubenswrapper[4678]: I1124 12:04:00.298807 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:04:00 crc kubenswrapper[4678]: I1124 12:04:00.298882 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" gracePeriod=600 Nov 24 12:04:00 crc kubenswrapper[4678]: E1124 12:04:00.432471 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:04:01 crc kubenswrapper[4678]: I1124 12:04:01.079214 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" exitCode=0 Nov 24 12:04:01 crc kubenswrapper[4678]: I1124 12:04:01.079288 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533"} Nov 24 12:04:01 crc kubenswrapper[4678]: I1124 12:04:01.079539 4678 scope.go:117] "RemoveContainer" containerID="d60c05291373c2a59fe98401e152effd6edd15bd4a9cf084c09c97e923c9a838" Nov 24 12:04:01 crc kubenswrapper[4678]: I1124 12:04:01.080755 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:04:01 crc kubenswrapper[4678]: E1124 12:04:01.083460 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:04:04 crc kubenswrapper[4678]: I1124 12:04:04.115850 4678 generic.go:334] "Generic (PLEG): container finished" podID="8106bb6e-2abf-42db-8e44-80656738e917" containerID="19f2e8ad020379c5befe77913f88fae76cd4c3645da245047b2fc4e8d514427a" exitCode=0 Nov 24 12:04:04 crc kubenswrapper[4678]: I1124 12:04:04.115923 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" event={"ID":"8106bb6e-2abf-42db-8e44-80656738e917","Type":"ContainerDied","Data":"19f2e8ad020379c5befe77913f88fae76cd4c3645da245047b2fc4e8d514427a"} Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.638834 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.736503 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wkl\" (UniqueName: \"kubernetes.io/projected/8106bb6e-2abf-42db-8e44-80656738e917-kube-api-access-c9wkl\") pod \"8106bb6e-2abf-42db-8e44-80656738e917\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.736659 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-2\") pod \"8106bb6e-2abf-42db-8e44-80656738e917\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.736781 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ssh-key\") pod \"8106bb6e-2abf-42db-8e44-80656738e917\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.736919 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-1\") pod \"8106bb6e-2abf-42db-8e44-80656738e917\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.737006 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-0\") pod \"8106bb6e-2abf-42db-8e44-80656738e917\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " Nov 24 12:04:05 crc kubenswrapper[4678]: 
I1124 12:04:05.737106 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-inventory\") pod \"8106bb6e-2abf-42db-8e44-80656738e917\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.737142 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-telemetry-combined-ca-bundle\") pod \"8106bb6e-2abf-42db-8e44-80656738e917\" (UID: \"8106bb6e-2abf-42db-8e44-80656738e917\") " Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.799045 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8106bb6e-2abf-42db-8e44-80656738e917-kube-api-access-c9wkl" (OuterVolumeSpecName: "kube-api-access-c9wkl") pod "8106bb6e-2abf-42db-8e44-80656738e917" (UID: "8106bb6e-2abf-42db-8e44-80656738e917"). InnerVolumeSpecName "kube-api-access-c9wkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.802291 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8106bb6e-2abf-42db-8e44-80656738e917" (UID: "8106bb6e-2abf-42db-8e44-80656738e917"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.843655 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wkl\" (UniqueName: \"kubernetes.io/projected/8106bb6e-2abf-42db-8e44-80656738e917-kube-api-access-c9wkl\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.843717 4678 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.858581 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8106bb6e-2abf-42db-8e44-80656738e917" (UID: "8106bb6e-2abf-42db-8e44-80656738e917"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.893437 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8106bb6e-2abf-42db-8e44-80656738e917" (UID: "8106bb6e-2abf-42db-8e44-80656738e917"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.893921 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8106bb6e-2abf-42db-8e44-80656738e917" (UID: "8106bb6e-2abf-42db-8e44-80656738e917"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.897139 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8106bb6e-2abf-42db-8e44-80656738e917" (UID: "8106bb6e-2abf-42db-8e44-80656738e917"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.900597 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-inventory" (OuterVolumeSpecName: "inventory") pod "8106bb6e-2abf-42db-8e44-80656738e917" (UID: "8106bb6e-2abf-42db-8e44-80656738e917"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.947748 4678 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.947830 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.947848 4678 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.947865 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:05 crc kubenswrapper[4678]: I1124 12:04:05.947878 4678 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8106bb6e-2abf-42db-8e44-80656738e917-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.140037 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" event={"ID":"8106bb6e-2abf-42db-8e44-80656738e917","Type":"ContainerDied","Data":"ef7a158f8a9db234cac5c7ce0b7371dc3850fa95360a9768e4f4ce49b3c1b981"} Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.140086 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7a158f8a9db234cac5c7ce0b7371dc3850fa95360a9768e4f4ce49b3c1b981" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.140126 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.267407 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t"] Nov 24 12:04:06 crc kubenswrapper[4678]: E1124 12:04:06.269538 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="extract-content" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.269564 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="extract-content" Nov 24 12:04:06 crc kubenswrapper[4678]: E1124 12:04:06.269577 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerName="extract-content" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.269585 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerName="extract-content" Nov 24 12:04:06 crc kubenswrapper[4678]: E1124 12:04:06.269617 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerName="extract-utilities" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.269627 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerName="extract-utilities" Nov 24 12:04:06 crc kubenswrapper[4678]: E1124 12:04:06.269650 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="extract-utilities" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.269658 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="extract-utilities" Nov 24 12:04:06 crc kubenswrapper[4678]: E1124 12:04:06.269766 4678 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerName="registry-server" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.269780 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerName="registry-server" Nov 24 12:04:06 crc kubenswrapper[4678]: E1124 12:04:06.269870 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8106bb6e-2abf-42db-8e44-80656738e917" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.269882 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8106bb6e-2abf-42db-8e44-80656738e917" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 12:04:06 crc kubenswrapper[4678]: E1124 12:04:06.269904 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="registry-server" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.269912 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="registry-server" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.270208 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="8106bb6e-2abf-42db-8e44-80656738e917" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.271041 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb8a76b-e1df-41d7-b607-2f6014b7d25c" containerName="registry-server" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.271075 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a6e2b2-5559-4f5f-8602-605febf66fee" containerName="registry-server" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.272256 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.277725 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.277977 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.278346 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.278787 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.281873 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t"] Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.287321 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.358147 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.358207 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-telemetry-power-monitoring-combined-ca-bundle\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.358299 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.358337 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.358730 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.358974 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.359004 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn7f8\" (UniqueName: \"kubernetes.io/projected/178a6623-f5e9-4ead-a910-e4ca618af68c-kube-api-access-tn7f8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.461758 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.462238 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.462358 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: 
\"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.462478 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.462514 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn7f8\" (UniqueName: \"kubernetes.io/projected/178a6623-f5e9-4ead-a910-e4ca618af68c-kube-api-access-tn7f8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.462570 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.462600 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.467439 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.467580 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.467810 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.468331 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc 
kubenswrapper[4678]: I1124 12:04:06.468527 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.469025 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.483764 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn7f8\" (UniqueName: \"kubernetes.io/projected/178a6623-f5e9-4ead-a910-e4ca618af68c-kube-api-access-tn7f8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:06 crc kubenswrapper[4678]: I1124 12:04:06.602896 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:04:07 crc kubenswrapper[4678]: I1124 12:04:07.187371 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t"] Nov 24 12:04:07 crc kubenswrapper[4678]: I1124 12:04:07.191073 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:04:08 crc kubenswrapper[4678]: I1124 12:04:08.163456 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" event={"ID":"178a6623-f5e9-4ead-a910-e4ca618af68c","Type":"ContainerStarted","Data":"df953eaa21e529aebc9c4a77ce5a2d112087dd8c401dfa811601b0866893a1bc"} Nov 24 12:04:08 crc kubenswrapper[4678]: I1124 12:04:08.163748 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" event={"ID":"178a6623-f5e9-4ead-a910-e4ca618af68c","Type":"ContainerStarted","Data":"3856accb1a5dd23faeb9da7ab103a3a70dcc3482f96c6c6a19c179fc68040d83"} Nov 24 12:04:08 crc kubenswrapper[4678]: I1124 12:04:08.214468 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" podStartSLOduration=1.797467302 podStartE2EDuration="2.214445782s" podCreationTimestamp="2025-11-24 12:04:06 +0000 UTC" firstStartedPulling="2025-11-24 12:04:07.190834787 +0000 UTC m=+2858.121894426" lastFinishedPulling="2025-11-24 12:04:07.607813267 +0000 UTC m=+2858.538872906" observedRunningTime="2025-11-24 12:04:08.211036351 +0000 UTC m=+2859.142095990" watchObservedRunningTime="2025-11-24 12:04:08.214445782 +0000 UTC m=+2859.145505421" Nov 24 12:04:13 crc kubenswrapper[4678]: I1124 12:04:13.896005 4678 scope.go:117] "RemoveContainer" 
containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:04:13 crc kubenswrapper[4678]: E1124 12:04:13.897029 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:04:27 crc kubenswrapper[4678]: I1124 12:04:27.896227 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:04:27 crc kubenswrapper[4678]: E1124 12:04:27.897094 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:04:38 crc kubenswrapper[4678]: I1124 12:04:38.896186 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:04:38 crc kubenswrapper[4678]: E1124 12:04:38.897272 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:04:51 crc kubenswrapper[4678]: I1124 12:04:51.895935 4678 scope.go:117] 
"RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:04:51 crc kubenswrapper[4678]: E1124 12:04:51.897359 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:05:06 crc kubenswrapper[4678]: I1124 12:05:06.896154 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:05:06 crc kubenswrapper[4678]: E1124 12:05:06.899841 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:05:18 crc kubenswrapper[4678]: I1124 12:05:18.896091 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:05:18 crc kubenswrapper[4678]: E1124 12:05:18.897000 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:05:29 crc kubenswrapper[4678]: I1124 12:05:29.906518 
4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:05:29 crc kubenswrapper[4678]: E1124 12:05:29.907642 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:05:43 crc kubenswrapper[4678]: I1124 12:05:43.897495 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:05:43 crc kubenswrapper[4678]: E1124 12:05:43.898656 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:05:58 crc kubenswrapper[4678]: I1124 12:05:58.899177 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:05:58 crc kubenswrapper[4678]: E1124 12:05:58.900352 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:06:11 crc kubenswrapper[4678]: I1124 
12:06:11.895474 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:06:11 crc kubenswrapper[4678]: E1124 12:06:11.896595 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:06:13 crc kubenswrapper[4678]: I1124 12:06:13.612051 4678 generic.go:334] "Generic (PLEG): container finished" podID="178a6623-f5e9-4ead-a910-e4ca618af68c" containerID="df953eaa21e529aebc9c4a77ce5a2d112087dd8c401dfa811601b0866893a1bc" exitCode=0 Nov 24 12:06:13 crc kubenswrapper[4678]: I1124 12:06:13.612154 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" event={"ID":"178a6623-f5e9-4ead-a910-e4ca618af68c","Type":"ContainerDied","Data":"df953eaa21e529aebc9c4a77ce5a2d112087dd8c401dfa811601b0866893a1bc"} Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.115000 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.200764 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-1\") pod \"178a6623-f5e9-4ead-a910-e4ca618af68c\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.200928 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-0\") pod \"178a6623-f5e9-4ead-a910-e4ca618af68c\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.201027 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-inventory\") pod \"178a6623-f5e9-4ead-a910-e4ca618af68c\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.201160 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-telemetry-power-monitoring-combined-ca-bundle\") pod \"178a6623-f5e9-4ead-a910-e4ca618af68c\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.201275 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ssh-key\") pod \"178a6623-f5e9-4ead-a910-e4ca618af68c\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " Nov 24 12:06:15 crc kubenswrapper[4678]: 
I1124 12:06:15.201315 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn7f8\" (UniqueName: \"kubernetes.io/projected/178a6623-f5e9-4ead-a910-e4ca618af68c-kube-api-access-tn7f8\") pod \"178a6623-f5e9-4ead-a910-e4ca618af68c\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.201370 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-2\") pod \"178a6623-f5e9-4ead-a910-e4ca618af68c\" (UID: \"178a6623-f5e9-4ead-a910-e4ca618af68c\") " Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.207160 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "178a6623-f5e9-4ead-a910-e4ca618af68c" (UID: "178a6623-f5e9-4ead-a910-e4ca618af68c"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.207306 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178a6623-f5e9-4ead-a910-e4ca618af68c-kube-api-access-tn7f8" (OuterVolumeSpecName: "kube-api-access-tn7f8") pod "178a6623-f5e9-4ead-a910-e4ca618af68c" (UID: "178a6623-f5e9-4ead-a910-e4ca618af68c"). InnerVolumeSpecName "kube-api-access-tn7f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.233843 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "178a6623-f5e9-4ead-a910-e4ca618af68c" (UID: "178a6623-f5e9-4ead-a910-e4ca618af68c"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.234577 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "178a6623-f5e9-4ead-a910-e4ca618af68c" (UID: "178a6623-f5e9-4ead-a910-e4ca618af68c"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.235532 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "178a6623-f5e9-4ead-a910-e4ca618af68c" (UID: "178a6623-f5e9-4ead-a910-e4ca618af68c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.241334 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "178a6623-f5e9-4ead-a910-e4ca618af68c" (UID: "178a6623-f5e9-4ead-a910-e4ca618af68c"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.261513 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-inventory" (OuterVolumeSpecName: "inventory") pod "178a6623-f5e9-4ead-a910-e4ca618af68c" (UID: "178a6623-f5e9-4ead-a910-e4ca618af68c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.305090 4678 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.305129 4678 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.305155 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.305169 4678 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.305182 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.305193 4678 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-tn7f8\" (UniqueName: \"kubernetes.io/projected/178a6623-f5e9-4ead-a910-e4ca618af68c-kube-api-access-tn7f8\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.305203 4678 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/178a6623-f5e9-4ead-a910-e4ca618af68c-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.634853 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" event={"ID":"178a6623-f5e9-4ead-a910-e4ca618af68c","Type":"ContainerDied","Data":"3856accb1a5dd23faeb9da7ab103a3a70dcc3482f96c6c6a19c179fc68040d83"} Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.634903 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3856accb1a5dd23faeb9da7ab103a3a70dcc3482f96c6c6a19c179fc68040d83" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.634941 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.751135 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n"] Nov 24 12:06:15 crc kubenswrapper[4678]: E1124 12:06:15.751869 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178a6623-f5e9-4ead-a910-e4ca618af68c" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.751894 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="178a6623-f5e9-4ead-a910-e4ca618af68c" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.752255 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="178a6623-f5e9-4ead-a910-e4ca618af68c" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.753282 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.756051 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.756161 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.756229 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fkss4" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.756374 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.756420 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.763323 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n"] Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.816708 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5df\" (UniqueName: \"kubernetes.io/projected/de1e2b8c-1820-4954-94b2-c7c021fba2ee-kube-api-access-kc5df\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.816760 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.816904 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.816937 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.816981 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.919311 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.919393 4678 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.919629 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc5df\" (UniqueName: \"kubernetes.io/projected/de1e2b8c-1820-4954-94b2-c7c021fba2ee-kube-api-access-kc5df\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.919722 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.920326 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.924889 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: 
\"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.925142 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.926188 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.926473 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:15 crc kubenswrapper[4678]: I1124 12:06:15.936943 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc5df\" (UniqueName: \"kubernetes.io/projected/de1e2b8c-1820-4954-94b2-c7c021fba2ee-kube-api-access-kc5df\") pod \"logging-edpm-deployment-openstack-edpm-ipam-sfn2n\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:16 crc kubenswrapper[4678]: I1124 12:06:16.086469 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:16 crc kubenswrapper[4678]: I1124 12:06:16.661528 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n"] Nov 24 12:06:17 crc kubenswrapper[4678]: I1124 12:06:17.655814 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" event={"ID":"de1e2b8c-1820-4954-94b2-c7c021fba2ee","Type":"ContainerStarted","Data":"dd315f157765a8a30384731dc705d93cc6c86b2ee4f0fce164a256bec516be7c"} Nov 24 12:06:18 crc kubenswrapper[4678]: I1124 12:06:18.668902 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" event={"ID":"de1e2b8c-1820-4954-94b2-c7c021fba2ee","Type":"ContainerStarted","Data":"609160d21850d86296e9913e6e9936500a68b62bc0de1689033b9c1252cc606f"} Nov 24 12:06:18 crc kubenswrapper[4678]: I1124 12:06:18.696530 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" podStartSLOduration=2.727402177 podStartE2EDuration="3.696513063s" podCreationTimestamp="2025-11-24 12:06:15 +0000 UTC" firstStartedPulling="2025-11-24 12:06:16.666688555 +0000 UTC m=+2987.597748194" lastFinishedPulling="2025-11-24 12:06:17.635799441 +0000 UTC m=+2988.566859080" observedRunningTime="2025-11-24 12:06:18.685133077 +0000 UTC m=+2989.616192716" watchObservedRunningTime="2025-11-24 12:06:18.696513063 +0000 UTC m=+2989.627572702" Nov 24 12:06:26 crc kubenswrapper[4678]: I1124 12:06:26.896096 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:06:26 crc kubenswrapper[4678]: E1124 12:06:26.896947 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:06:33 crc kubenswrapper[4678]: I1124 12:06:33.935200 4678 generic.go:334] "Generic (PLEG): container finished" podID="de1e2b8c-1820-4954-94b2-c7c021fba2ee" containerID="609160d21850d86296e9913e6e9936500a68b62bc0de1689033b9c1252cc606f" exitCode=0 Nov 24 12:06:33 crc kubenswrapper[4678]: I1124 12:06:33.935263 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" event={"ID":"de1e2b8c-1820-4954-94b2-c7c021fba2ee","Type":"ContainerDied","Data":"609160d21850d86296e9913e6e9936500a68b62bc0de1689033b9c1252cc606f"} Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.423742 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.588019 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-inventory\") pod \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.588579 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-0\") pod \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.588741 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc5df\" (UniqueName: 
\"kubernetes.io/projected/de1e2b8c-1820-4954-94b2-c7c021fba2ee-kube-api-access-kc5df\") pod \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.588796 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-1\") pod \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.588816 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-ssh-key\") pod \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\" (UID: \"de1e2b8c-1820-4954-94b2-c7c021fba2ee\") " Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.602630 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1e2b8c-1820-4954-94b2-c7c021fba2ee-kube-api-access-kc5df" (OuterVolumeSpecName: "kube-api-access-kc5df") pod "de1e2b8c-1820-4954-94b2-c7c021fba2ee" (UID: "de1e2b8c-1820-4954-94b2-c7c021fba2ee"). InnerVolumeSpecName "kube-api-access-kc5df". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.633286 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "de1e2b8c-1820-4954-94b2-c7c021fba2ee" (UID: "de1e2b8c-1820-4954-94b2-c7c021fba2ee"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.641431 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "de1e2b8c-1820-4954-94b2-c7c021fba2ee" (UID: "de1e2b8c-1820-4954-94b2-c7c021fba2ee"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.641572 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-inventory" (OuterVolumeSpecName: "inventory") pod "de1e2b8c-1820-4954-94b2-c7c021fba2ee" (UID: "de1e2b8c-1820-4954-94b2-c7c021fba2ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.663426 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "de1e2b8c-1820-4954-94b2-c7c021fba2ee" (UID: "de1e2b8c-1820-4954-94b2-c7c021fba2ee"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.691930 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc5df\" (UniqueName: \"kubernetes.io/projected/de1e2b8c-1820-4954-94b2-c7c021fba2ee-kube-api-access-kc5df\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.691972 4678 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.691986 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.692000 4678 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.692011 4678 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de1e2b8c-1820-4954-94b2-c7c021fba2ee-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.960365 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" event={"ID":"de1e2b8c-1820-4954-94b2-c7c021fba2ee","Type":"ContainerDied","Data":"dd315f157765a8a30384731dc705d93cc6c86b2ee4f0fce164a256bec516be7c"} Nov 24 12:06:35 crc kubenswrapper[4678]: I1124 12:06:35.960432 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd315f157765a8a30384731dc705d93cc6c86b2ee4f0fce164a256bec516be7c" Nov 24 12:06:35 
crc kubenswrapper[4678]: I1124 12:06:35.960447 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-sfn2n" Nov 24 12:06:40 crc kubenswrapper[4678]: I1124 12:06:40.896839 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:06:40 crc kubenswrapper[4678]: E1124 12:06:40.898458 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:06:53 crc kubenswrapper[4678]: I1124 12:06:53.897565 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:06:53 crc kubenswrapper[4678]: E1124 12:06:53.899634 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:07:07 crc kubenswrapper[4678]: I1124 12:07:07.896406 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:07:07 crc kubenswrapper[4678]: E1124 12:07:07.897398 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:07:21 crc kubenswrapper[4678]: I1124 12:07:21.896175 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:07:21 crc kubenswrapper[4678]: E1124 12:07:21.897352 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:07:34 crc kubenswrapper[4678]: I1124 12:07:34.896240 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:07:34 crc kubenswrapper[4678]: E1124 12:07:34.897101 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:07:46 crc kubenswrapper[4678]: I1124 12:07:46.896084 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:07:46 crc kubenswrapper[4678]: E1124 12:07:46.897597 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:08:00 crc kubenswrapper[4678]: I1124 12:08:00.896244 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:08:00 crc kubenswrapper[4678]: E1124 12:08:00.897121 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:08:14 crc kubenswrapper[4678]: I1124 12:08:14.896548 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:08:14 crc kubenswrapper[4678]: E1124 12:08:14.897704 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:08:15 crc kubenswrapper[4678]: E1124 12:08:15.535816 4678 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:45262->38.102.83.214:39261: write tcp 38.102.83.214:45262->38.102.83.214:39261: write: broken pipe Nov 24 12:08:29 crc kubenswrapper[4678]: I1124 12:08:29.930566 4678 scope.go:117] "RemoveContainer" 
containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:08:29 crc kubenswrapper[4678]: E1124 12:08:29.933101 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:08:44 crc kubenswrapper[4678]: I1124 12:08:44.896565 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:08:44 crc kubenswrapper[4678]: E1124 12:08:44.897716 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:08:58 crc kubenswrapper[4678]: I1124 12:08:58.895720 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:08:58 crc kubenswrapper[4678]: E1124 12:08:58.897045 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:09:12 crc kubenswrapper[4678]: I1124 12:09:12.895964 4678 scope.go:117] 
"RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:09:13 crc kubenswrapper[4678]: I1124 12:09:13.979446 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"d4dd29508d8e0bdb527834c0803c6c584ca7d2f5db4eb1981ddbeb49f842bb0e"} Nov 24 12:11:30 crc kubenswrapper[4678]: I1124 12:11:30.296856 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:11:30 crc kubenswrapper[4678]: I1124 12:11:30.297568 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.218644 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ptfpk"] Nov 24 12:11:52 crc kubenswrapper[4678]: E1124 12:11:52.221709 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1e2b8c-1820-4954-94b2-c7c021fba2ee" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.221762 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1e2b8c-1820-4954-94b2-c7c021fba2ee" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.222242 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1e2b8c-1820-4954-94b2-c7c021fba2ee" 
containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.245856 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ptfpk"] Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.246068 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.312878 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-catalog-content\") pod \"certified-operators-ptfpk\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.313057 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-utilities\") pod \"certified-operators-ptfpk\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.313640 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6qsw\" (UniqueName: \"kubernetes.io/projected/e9847b8d-b71a-44ed-a659-234a996226cf-kube-api-access-h6qsw\") pod \"certified-operators-ptfpk\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.416392 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-catalog-content\") pod \"certified-operators-ptfpk\" (UID: 
\"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.416538 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-utilities\") pod \"certified-operators-ptfpk\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.416730 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6qsw\" (UniqueName: \"kubernetes.io/projected/e9847b8d-b71a-44ed-a659-234a996226cf-kube-api-access-h6qsw\") pod \"certified-operators-ptfpk\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.416924 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-catalog-content\") pod \"certified-operators-ptfpk\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.417199 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-utilities\") pod \"certified-operators-ptfpk\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.439351 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6qsw\" (UniqueName: \"kubernetes.io/projected/e9847b8d-b71a-44ed-a659-234a996226cf-kube-api-access-h6qsw\") pod \"certified-operators-ptfpk\" (UID: 
\"e9847b8d-b71a-44ed-a659-234a996226cf\") " pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:52 crc kubenswrapper[4678]: I1124 12:11:52.604076 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:11:53 crc kubenswrapper[4678]: I1124 12:11:53.220246 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ptfpk"] Nov 24 12:11:54 crc kubenswrapper[4678]: I1124 12:11:54.224245 4678 generic.go:334] "Generic (PLEG): container finished" podID="e9847b8d-b71a-44ed-a659-234a996226cf" containerID="7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3" exitCode=0 Nov 24 12:11:54 crc kubenswrapper[4678]: I1124 12:11:54.224309 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptfpk" event={"ID":"e9847b8d-b71a-44ed-a659-234a996226cf","Type":"ContainerDied","Data":"7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3"} Nov 24 12:11:54 crc kubenswrapper[4678]: I1124 12:11:54.224966 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptfpk" event={"ID":"e9847b8d-b71a-44ed-a659-234a996226cf","Type":"ContainerStarted","Data":"f640b96651a617491f189a36d44314325fa26c0636da1af00da531622cd1ab0c"} Nov 24 12:11:54 crc kubenswrapper[4678]: I1124 12:11:54.227889 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:11:56 crc kubenswrapper[4678]: I1124 12:11:56.247289 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptfpk" event={"ID":"e9847b8d-b71a-44ed-a659-234a996226cf","Type":"ContainerStarted","Data":"61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04"} Nov 24 12:12:00 crc kubenswrapper[4678]: I1124 12:12:00.297154 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:12:00 crc kubenswrapper[4678]: I1124 12:12:00.297794 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:12:03 crc kubenswrapper[4678]: I1124 12:12:03.333207 4678 generic.go:334] "Generic (PLEG): container finished" podID="e9847b8d-b71a-44ed-a659-234a996226cf" containerID="61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04" exitCode=0 Nov 24 12:12:03 crc kubenswrapper[4678]: I1124 12:12:03.333324 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptfpk" event={"ID":"e9847b8d-b71a-44ed-a659-234a996226cf","Type":"ContainerDied","Data":"61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04"} Nov 24 12:12:04 crc kubenswrapper[4678]: I1124 12:12:04.354036 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptfpk" event={"ID":"e9847b8d-b71a-44ed-a659-234a996226cf","Type":"ContainerStarted","Data":"7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef"} Nov 24 12:12:04 crc kubenswrapper[4678]: I1124 12:12:04.390430 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ptfpk" podStartSLOduration=2.6866776100000003 podStartE2EDuration="12.390407941s" podCreationTimestamp="2025-11-24 12:11:52 +0000 UTC" firstStartedPulling="2025-11-24 12:11:54.227590582 +0000 UTC m=+3325.158650221" lastFinishedPulling="2025-11-24 12:12:03.931320903 +0000 UTC m=+3334.862380552" 
observedRunningTime="2025-11-24 12:12:04.379145648 +0000 UTC m=+3335.310205297" watchObservedRunningTime="2025-11-24 12:12:04.390407941 +0000 UTC m=+3335.321467580" Nov 24 12:12:12 crc kubenswrapper[4678]: I1124 12:12:12.604202 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:12:12 crc kubenswrapper[4678]: I1124 12:12:12.604868 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:12:12 crc kubenswrapper[4678]: I1124 12:12:12.659845 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:12:13 crc kubenswrapper[4678]: I1124 12:12:13.554077 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:12:13 crc kubenswrapper[4678]: I1124 12:12:13.626611 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ptfpk"] Nov 24 12:12:15 crc kubenswrapper[4678]: I1124 12:12:15.522637 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ptfpk" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" containerName="registry-server" containerID="cri-o://7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef" gracePeriod=2 Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.111215 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.222284 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-catalog-content\") pod \"e9847b8d-b71a-44ed-a659-234a996226cf\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.222821 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-utilities\") pod \"e9847b8d-b71a-44ed-a659-234a996226cf\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.222915 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6qsw\" (UniqueName: \"kubernetes.io/projected/e9847b8d-b71a-44ed-a659-234a996226cf-kube-api-access-h6qsw\") pod \"e9847b8d-b71a-44ed-a659-234a996226cf\" (UID: \"e9847b8d-b71a-44ed-a659-234a996226cf\") " Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.223764 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-utilities" (OuterVolumeSpecName: "utilities") pod "e9847b8d-b71a-44ed-a659-234a996226cf" (UID: "e9847b8d-b71a-44ed-a659-234a996226cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.228703 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9847b8d-b71a-44ed-a659-234a996226cf-kube-api-access-h6qsw" (OuterVolumeSpecName: "kube-api-access-h6qsw") pod "e9847b8d-b71a-44ed-a659-234a996226cf" (UID: "e9847b8d-b71a-44ed-a659-234a996226cf"). InnerVolumeSpecName "kube-api-access-h6qsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.273482 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9847b8d-b71a-44ed-a659-234a996226cf" (UID: "e9847b8d-b71a-44ed-a659-234a996226cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.325927 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.325969 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9847b8d-b71a-44ed-a659-234a996226cf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.325987 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6qsw\" (UniqueName: \"kubernetes.io/projected/e9847b8d-b71a-44ed-a659-234a996226cf-kube-api-access-h6qsw\") on node \"crc\" DevicePath \"\"" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.539762 4678 generic.go:334] "Generic (PLEG): container finished" podID="e9847b8d-b71a-44ed-a659-234a996226cf" containerID="7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef" exitCode=0 Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.539836 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptfpk" event={"ID":"e9847b8d-b71a-44ed-a659-234a996226cf","Type":"ContainerDied","Data":"7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef"} Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.539888 4678 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-ptfpk" event={"ID":"e9847b8d-b71a-44ed-a659-234a996226cf","Type":"ContainerDied","Data":"f640b96651a617491f189a36d44314325fa26c0636da1af00da531622cd1ab0c"} Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.539908 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptfpk" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.539916 4678 scope.go:117] "RemoveContainer" containerID="7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.584827 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ptfpk"] Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.597656 4678 scope.go:117] "RemoveContainer" containerID="61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.599653 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ptfpk"] Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.636931 4678 scope.go:117] "RemoveContainer" containerID="7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.700599 4678 scope.go:117] "RemoveContainer" containerID="7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef" Nov 24 12:12:16 crc kubenswrapper[4678]: E1124 12:12:16.701291 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef\": container with ID starting with 7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef not found: ID does not exist" containerID="7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 
12:12:16.701332 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef"} err="failed to get container status \"7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef\": rpc error: code = NotFound desc = could not find container \"7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef\": container with ID starting with 7b64509e01bb84948ba2a857704c2b1f506ec7235eb0d437574b63d1fae9d8ef not found: ID does not exist" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.701370 4678 scope.go:117] "RemoveContainer" containerID="61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04" Nov 24 12:12:16 crc kubenswrapper[4678]: E1124 12:12:16.701878 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04\": container with ID starting with 61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04 not found: ID does not exist" containerID="61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.701899 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04"} err="failed to get container status \"61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04\": rpc error: code = NotFound desc = could not find container \"61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04\": container with ID starting with 61637273b81575f913dcdc5e988e957ce6f6ee864805459f8393d9bd67733a04 not found: ID does not exist" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.701916 4678 scope.go:117] "RemoveContainer" containerID="7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3" Nov 24 12:12:16 crc 
kubenswrapper[4678]: E1124 12:12:16.702200 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3\": container with ID starting with 7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3 not found: ID does not exist" containerID="7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3" Nov 24 12:12:16 crc kubenswrapper[4678]: I1124 12:12:16.702234 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3"} err="failed to get container status \"7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3\": rpc error: code = NotFound desc = could not find container \"7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3\": container with ID starting with 7fa63b42e4d1e042a6df7817d3e7e90410c7414646104fdd8bdacc9c7c8ed6c3 not found: ID does not exist" Nov 24 12:12:17 crc kubenswrapper[4678]: I1124 12:12:17.914422 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" path="/var/lib/kubelet/pods/e9847b8d-b71a-44ed-a659-234a996226cf/volumes" Nov 24 12:12:30 crc kubenswrapper[4678]: I1124 12:12:30.296478 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:12:30 crc kubenswrapper[4678]: I1124 12:12:30.297075 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 24 12:12:30 crc kubenswrapper[4678]: I1124 12:12:30.297124 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:12:30 crc kubenswrapper[4678]: I1124 12:12:30.298189 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4dd29508d8e0bdb527834c0803c6c584ca7d2f5db4eb1981ddbeb49f842bb0e"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:12:30 crc kubenswrapper[4678]: I1124 12:12:30.298257 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://d4dd29508d8e0bdb527834c0803c6c584ca7d2f5db4eb1981ddbeb49f842bb0e" gracePeriod=600 Nov 24 12:12:30 crc kubenswrapper[4678]: I1124 12:12:30.713534 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="d4dd29508d8e0bdb527834c0803c6c584ca7d2f5db4eb1981ddbeb49f842bb0e" exitCode=0 Nov 24 12:12:30 crc kubenswrapper[4678]: I1124 12:12:30.713581 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"d4dd29508d8e0bdb527834c0803c6c584ca7d2f5db4eb1981ddbeb49f842bb0e"} Nov 24 12:12:30 crc kubenswrapper[4678]: I1124 12:12:30.713900 4678 scope.go:117] "RemoveContainer" containerID="150fb52301804752fa87e5510250872188e733db991161b5ec4f334e28d1e533" Nov 24 12:12:31 crc kubenswrapper[4678]: I1124 12:12:31.735128 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614"} Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.423103 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t2vks"] Nov 24 12:12:46 crc kubenswrapper[4678]: E1124 12:12:46.424200 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" containerName="extract-utilities" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.424225 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" containerName="extract-utilities" Nov 24 12:12:46 crc kubenswrapper[4678]: E1124 12:12:46.424265 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" containerName="extract-content" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.424271 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" containerName="extract-content" Nov 24 12:12:46 crc kubenswrapper[4678]: E1124 12:12:46.424293 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" containerName="registry-server" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.424299 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" containerName="registry-server" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.424583 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9847b8d-b71a-44ed-a659-234a996226cf" containerName="registry-server" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.426521 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.441055 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2vks"] Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.475510 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqks\" (UniqueName: \"kubernetes.io/projected/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-kube-api-access-pmqks\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.476082 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-utilities\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.476461 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-catalog-content\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.580314 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-utilities\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.580422 4678 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-catalog-content\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.580629 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqks\" (UniqueName: \"kubernetes.io/projected/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-kube-api-access-pmqks\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.581027 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-utilities\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.581074 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-catalog-content\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.606493 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqks\" (UniqueName: \"kubernetes.io/projected/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-kube-api-access-pmqks\") pod \"redhat-marketplace-t2vks\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:46 crc kubenswrapper[4678]: I1124 12:12:46.754605 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:12:47 crc kubenswrapper[4678]: I1124 12:12:47.319865 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2vks"] Nov 24 12:12:47 crc kubenswrapper[4678]: I1124 12:12:47.960418 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2vks" event={"ID":"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c","Type":"ContainerStarted","Data":"ebcc2fd4920241e5707a97d72113599c0d399d80a2b33d1cb7ebf40096f15c3e"} Nov 24 12:12:49 crc kubenswrapper[4678]: I1124 12:12:49.998134 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2vks" event={"ID":"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c","Type":"ContainerStarted","Data":"e3959fddc927d7b318756f945959e55ba068583ac2d445ba17a542194d8eecb1"} Nov 24 12:12:50 crc kubenswrapper[4678]: I1124 12:12:50.795393 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tm45p"] Nov 24 12:12:50 crc kubenswrapper[4678]: I1124 12:12:50.799042 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:50 crc kubenswrapper[4678]: I1124 12:12:50.808718 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tm45p"] Nov 24 12:12:50 crc kubenswrapper[4678]: I1124 12:12:50.910037 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-catalog-content\") pod \"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:50 crc kubenswrapper[4678]: I1124 12:12:50.910932 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-utilities\") pod \"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:50 crc kubenswrapper[4678]: I1124 12:12:50.911173 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8gm4\" (UniqueName: \"kubernetes.io/projected/4b869d75-f280-402a-90b0-bce57592f120-kube-api-access-z8gm4\") pod \"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.011116 4678 generic.go:334] "Generic (PLEG): container finished" podID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerID="e3959fddc927d7b318756f945959e55ba068583ac2d445ba17a542194d8eecb1" exitCode=0 Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.011170 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2vks" 
event={"ID":"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c","Type":"ContainerDied","Data":"e3959fddc927d7b318756f945959e55ba068583ac2d445ba17a542194d8eecb1"} Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.012917 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-catalog-content\") pod \"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.013014 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-utilities\") pod \"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.013171 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8gm4\" (UniqueName: \"kubernetes.io/projected/4b869d75-f280-402a-90b0-bce57592f120-kube-api-access-z8gm4\") pod \"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.013693 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-catalog-content\") pod \"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.014895 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-utilities\") pod 
\"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.042188 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8gm4\" (UniqueName: \"kubernetes.io/projected/4b869d75-f280-402a-90b0-bce57592f120-kube-api-access-z8gm4\") pod \"community-operators-tm45p\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.133494 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:12:51 crc kubenswrapper[4678]: I1124 12:12:51.765464 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tm45p"] Nov 24 12:12:51 crc kubenswrapper[4678]: W1124 12:12:51.770980 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b869d75_f280_402a_90b0_bce57592f120.slice/crio-b9f9e69816e900edcb2c45a4133ba0a35f82ca016e51e816f38febf6f91fcf10 WatchSource:0}: Error finding container b9f9e69816e900edcb2c45a4133ba0a35f82ca016e51e816f38febf6f91fcf10: Status 404 returned error can't find the container with id b9f9e69816e900edcb2c45a4133ba0a35f82ca016e51e816f38febf6f91fcf10 Nov 24 12:12:52 crc kubenswrapper[4678]: I1124 12:12:52.027405 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm45p" event={"ID":"4b869d75-f280-402a-90b0-bce57592f120","Type":"ContainerStarted","Data":"b9f9e69816e900edcb2c45a4133ba0a35f82ca016e51e816f38febf6f91fcf10"} Nov 24 12:12:53 crc kubenswrapper[4678]: I1124 12:12:53.041904 4678 generic.go:334] "Generic (PLEG): container finished" podID="4b869d75-f280-402a-90b0-bce57592f120" 
containerID="198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4" exitCode=0 Nov 24 12:12:53 crc kubenswrapper[4678]: I1124 12:12:53.041997 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm45p" event={"ID":"4b869d75-f280-402a-90b0-bce57592f120","Type":"ContainerDied","Data":"198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4"} Nov 24 12:12:55 crc kubenswrapper[4678]: I1124 12:12:55.074875 4678 generic.go:334] "Generic (PLEG): container finished" podID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerID="55d6b7f9a653d053128544935675fa8751b8eb852d46485c7773026cd4ab651c" exitCode=0 Nov 24 12:12:55 crc kubenswrapper[4678]: I1124 12:12:55.075807 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2vks" event={"ID":"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c","Type":"ContainerDied","Data":"55d6b7f9a653d053128544935675fa8751b8eb852d46485c7773026cd4ab651c"} Nov 24 12:12:57 crc kubenswrapper[4678]: I1124 12:12:57.105352 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm45p" event={"ID":"4b869d75-f280-402a-90b0-bce57592f120","Type":"ContainerStarted","Data":"5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7"} Nov 24 12:12:58 crc kubenswrapper[4678]: I1124 12:12:58.124900 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2vks" event={"ID":"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c","Type":"ContainerStarted","Data":"1b24daadde8294265a7d1a95e405a384f6809225c4430ead406135e835d181e2"} Nov 24 12:12:58 crc kubenswrapper[4678]: I1124 12:12:58.157246 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t2vks" podStartSLOduration=6.691852301 podStartE2EDuration="12.157218042s" podCreationTimestamp="2025-11-24 12:12:46 +0000 UTC" firstStartedPulling="2025-11-24 12:12:51.015250464 
+0000 UTC m=+3381.946310103" lastFinishedPulling="2025-11-24 12:12:56.480616195 +0000 UTC m=+3387.411675844" observedRunningTime="2025-11-24 12:12:58.144951252 +0000 UTC m=+3389.076010891" watchObservedRunningTime="2025-11-24 12:12:58.157218042 +0000 UTC m=+3389.088277671" Nov 24 12:13:01 crc kubenswrapper[4678]: I1124 12:13:01.164604 4678 generic.go:334] "Generic (PLEG): container finished" podID="4b869d75-f280-402a-90b0-bce57592f120" containerID="5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7" exitCode=0 Nov 24 12:13:01 crc kubenswrapper[4678]: I1124 12:13:01.165170 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm45p" event={"ID":"4b869d75-f280-402a-90b0-bce57592f120","Type":"ContainerDied","Data":"5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7"} Nov 24 12:13:03 crc kubenswrapper[4678]: I1124 12:13:03.198785 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm45p" event={"ID":"4b869d75-f280-402a-90b0-bce57592f120","Type":"ContainerStarted","Data":"1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02"} Nov 24 12:13:03 crc kubenswrapper[4678]: I1124 12:13:03.226210 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tm45p" podStartSLOduration=3.829517782 podStartE2EDuration="13.226178403s" podCreationTimestamp="2025-11-24 12:12:50 +0000 UTC" firstStartedPulling="2025-11-24 12:12:53.093372749 +0000 UTC m=+3384.024432388" lastFinishedPulling="2025-11-24 12:13:02.49003337 +0000 UTC m=+3393.421093009" observedRunningTime="2025-11-24 12:13:03.219664457 +0000 UTC m=+3394.150724096" watchObservedRunningTime="2025-11-24 12:13:03.226178403 +0000 UTC m=+3394.157238062" Nov 24 12:13:06 crc kubenswrapper[4678]: I1124 12:13:06.755685 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 
24 12:13:06 crc kubenswrapper[4678]: I1124 12:13:06.756945 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:13:06 crc kubenswrapper[4678]: I1124 12:13:06.817584 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:13:07 crc kubenswrapper[4678]: I1124 12:13:07.300051 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:13:07 crc kubenswrapper[4678]: I1124 12:13:07.358382 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2vks"] Nov 24 12:13:09 crc kubenswrapper[4678]: I1124 12:13:09.291839 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t2vks" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerName="registry-server" containerID="cri-o://1b24daadde8294265a7d1a95e405a384f6809225c4430ead406135e835d181e2" gracePeriod=2 Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.304614 4678 generic.go:334] "Generic (PLEG): container finished" podID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerID="1b24daadde8294265a7d1a95e405a384f6809225c4430ead406135e835d181e2" exitCode=0 Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.304712 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2vks" event={"ID":"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c","Type":"ContainerDied","Data":"1b24daadde8294265a7d1a95e405a384f6809225c4430ead406135e835d181e2"} Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.493532 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.542698 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmqks\" (UniqueName: \"kubernetes.io/projected/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-kube-api-access-pmqks\") pod \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.543072 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-catalog-content\") pod \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.543174 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-utilities\") pod \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\" (UID: \"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c\") " Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.544602 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-utilities" (OuterVolumeSpecName: "utilities") pod "ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" (UID: "ddc5a90a-ae3c-4f69-9c37-5901323b1c8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.549925 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-kube-api-access-pmqks" (OuterVolumeSpecName: "kube-api-access-pmqks") pod "ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" (UID: "ddc5a90a-ae3c-4f69-9c37-5901323b1c8c"). InnerVolumeSpecName "kube-api-access-pmqks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.561770 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" (UID: "ddc5a90a-ae3c-4f69-9c37-5901323b1c8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.647310 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.647378 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:10 crc kubenswrapper[4678]: I1124 12:13:10.647393 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmqks\" (UniqueName: \"kubernetes.io/projected/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c-kube-api-access-pmqks\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.135070 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.135137 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.204303 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.321970 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-t2vks" event={"ID":"ddc5a90a-ae3c-4f69-9c37-5901323b1c8c","Type":"ContainerDied","Data":"ebcc2fd4920241e5707a97d72113599c0d399d80a2b33d1cb7ebf40096f15c3e"} Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.322030 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2vks" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.322060 4678 scope.go:117] "RemoveContainer" containerID="1b24daadde8294265a7d1a95e405a384f6809225c4430ead406135e835d181e2" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.364345 4678 scope.go:117] "RemoveContainer" containerID="55d6b7f9a653d053128544935675fa8751b8eb852d46485c7773026cd4ab651c" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.369067 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2vks"] Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.385307 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2vks"] Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.396175 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.404628 4678 scope.go:117] "RemoveContainer" containerID="e3959fddc927d7b318756f945959e55ba068583ac2d445ba17a542194d8eecb1" Nov 24 12:13:11 crc kubenswrapper[4678]: I1124 12:13:11.913706 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" path="/var/lib/kubelet/pods/ddc5a90a-ae3c-4f69-9c37-5901323b1c8c/volumes" Nov 24 12:13:12 crc kubenswrapper[4678]: I1124 12:13:12.734165 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tm45p"] Nov 24 12:13:13 crc kubenswrapper[4678]: I1124 12:13:13.355312 4678 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-tm45p" podUID="4b869d75-f280-402a-90b0-bce57592f120" containerName="registry-server" containerID="cri-o://1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02" gracePeriod=2 Nov 24 12:13:13 crc kubenswrapper[4678]: I1124 12:13:13.949548 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.043260 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-utilities\") pod \"4b869d75-f280-402a-90b0-bce57592f120\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.043482 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-catalog-content\") pod \"4b869d75-f280-402a-90b0-bce57592f120\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.043632 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8gm4\" (UniqueName: \"kubernetes.io/projected/4b869d75-f280-402a-90b0-bce57592f120-kube-api-access-z8gm4\") pod \"4b869d75-f280-402a-90b0-bce57592f120\" (UID: \"4b869d75-f280-402a-90b0-bce57592f120\") " Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.044475 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-utilities" (OuterVolumeSpecName: "utilities") pod "4b869d75-f280-402a-90b0-bce57592f120" (UID: "4b869d75-f280-402a-90b0-bce57592f120"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.054386 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b869d75-f280-402a-90b0-bce57592f120-kube-api-access-z8gm4" (OuterVolumeSpecName: "kube-api-access-z8gm4") pod "4b869d75-f280-402a-90b0-bce57592f120" (UID: "4b869d75-f280-402a-90b0-bce57592f120"). InnerVolumeSpecName "kube-api-access-z8gm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.112439 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b869d75-f280-402a-90b0-bce57592f120" (UID: "4b869d75-f280-402a-90b0-bce57592f120"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.146446 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.146492 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8gm4\" (UniqueName: \"kubernetes.io/projected/4b869d75-f280-402a-90b0-bce57592f120-kube-api-access-z8gm4\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.146506 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b869d75-f280-402a-90b0-bce57592f120-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.372513 4678 generic.go:334] "Generic (PLEG): container finished" podID="4b869d75-f280-402a-90b0-bce57592f120" 
containerID="1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02" exitCode=0 Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.372680 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tm45p" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.372662 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm45p" event={"ID":"4b869d75-f280-402a-90b0-bce57592f120","Type":"ContainerDied","Data":"1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02"} Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.373069 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm45p" event={"ID":"4b869d75-f280-402a-90b0-bce57592f120","Type":"ContainerDied","Data":"b9f9e69816e900edcb2c45a4133ba0a35f82ca016e51e816f38febf6f91fcf10"} Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.373097 4678 scope.go:117] "RemoveContainer" containerID="1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.414767 4678 scope.go:117] "RemoveContainer" containerID="5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.421492 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tm45p"] Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.434505 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tm45p"] Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.452345 4678 scope.go:117] "RemoveContainer" containerID="198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.505598 4678 scope.go:117] "RemoveContainer" containerID="1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02" Nov 24 
12:13:14 crc kubenswrapper[4678]: E1124 12:13:14.506190 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02\": container with ID starting with 1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02 not found: ID does not exist" containerID="1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.506380 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02"} err="failed to get container status \"1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02\": rpc error: code = NotFound desc = could not find container \"1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02\": container with ID starting with 1bedf875d9e75138425ee9e3c7c56e791cefb102679bd72b761d9afb5d91aa02 not found: ID does not exist" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.506555 4678 scope.go:117] "RemoveContainer" containerID="5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7" Nov 24 12:13:14 crc kubenswrapper[4678]: E1124 12:13:14.507079 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7\": container with ID starting with 5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7 not found: ID does not exist" containerID="5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.507203 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7"} err="failed to get container status 
\"5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7\": rpc error: code = NotFound desc = could not find container \"5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7\": container with ID starting with 5af954d04153acecf8a18b9e33d4b5ff326ee6e4d42d26cb7348130687fae3b7 not found: ID does not exist" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.507307 4678 scope.go:117] "RemoveContainer" containerID="198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4" Nov 24 12:13:14 crc kubenswrapper[4678]: E1124 12:13:14.507892 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4\": container with ID starting with 198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4 not found: ID does not exist" containerID="198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4" Nov 24 12:13:14 crc kubenswrapper[4678]: I1124 12:13:14.507969 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4"} err="failed to get container status \"198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4\": rpc error: code = NotFound desc = could not find container \"198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4\": container with ID starting with 198aff5a7e5ff7c9d2edb895f95bfccd5183a0d4e0491e54a18635439b7f1ab4 not found: ID does not exist" Nov 24 12:13:15 crc kubenswrapper[4678]: I1124 12:13:15.912594 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b869d75-f280-402a-90b0-bce57592f120" path="/var/lib/kubelet/pods/4b869d75-f280-402a-90b0-bce57592f120/volumes" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.614216 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xvd8x"] Nov 24 12:13:34 crc 
kubenswrapper[4678]: E1124 12:13:34.616472 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerName="extract-utilities" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.616501 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerName="extract-utilities" Nov 24 12:13:34 crc kubenswrapper[4678]: E1124 12:13:34.616523 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerName="extract-content" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.616535 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerName="extract-content" Nov 24 12:13:34 crc kubenswrapper[4678]: E1124 12:13:34.616557 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b869d75-f280-402a-90b0-bce57592f120" containerName="registry-server" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.616566 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b869d75-f280-402a-90b0-bce57592f120" containerName="registry-server" Nov 24 12:13:34 crc kubenswrapper[4678]: E1124 12:13:34.616597 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerName="registry-server" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.616606 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerName="registry-server" Nov 24 12:13:34 crc kubenswrapper[4678]: E1124 12:13:34.616636 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b869d75-f280-402a-90b0-bce57592f120" containerName="extract-utilities" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.616644 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b869d75-f280-402a-90b0-bce57592f120" containerName="extract-utilities" Nov 24 12:13:34 crc 
kubenswrapper[4678]: E1124 12:13:34.616660 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b869d75-f280-402a-90b0-bce57592f120" containerName="extract-content" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.616670 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b869d75-f280-402a-90b0-bce57592f120" containerName="extract-content" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.617007 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc5a90a-ae3c-4f69-9c37-5901323b1c8c" containerName="registry-server" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.617037 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b869d75-f280-402a-90b0-bce57592f120" containerName="registry-server" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.619503 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.635028 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvd8x"] Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.716827 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmq8\" (UniqueName: \"kubernetes.io/projected/276d6e96-87af-451e-80a6-0267847f5760-kube-api-access-wtmq8\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.716906 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-utilities\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc 
kubenswrapper[4678]: I1124 12:13:34.717455 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-catalog-content\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.821568 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-catalog-content\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.821916 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtmq8\" (UniqueName: \"kubernetes.io/projected/276d6e96-87af-451e-80a6-0267847f5760-kube-api-access-wtmq8\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.822015 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-utilities\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.822229 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-catalog-content\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 
12:13:34.822719 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-utilities\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.849006 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtmq8\" (UniqueName: \"kubernetes.io/projected/276d6e96-87af-451e-80a6-0267847f5760-kube-api-access-wtmq8\") pod \"redhat-operators-xvd8x\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:34 crc kubenswrapper[4678]: I1124 12:13:34.946009 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:35 crc kubenswrapper[4678]: I1124 12:13:35.473055 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvd8x"] Nov 24 12:13:35 crc kubenswrapper[4678]: I1124 12:13:35.699001 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvd8x" event={"ID":"276d6e96-87af-451e-80a6-0267847f5760","Type":"ContainerStarted","Data":"fe1d7e4b5764a3da76e04915985973cbdebf19e42c1f7310ed8fb1dd26699f4a"} Nov 24 12:13:36 crc kubenswrapper[4678]: I1124 12:13:36.716475 4678 generic.go:334] "Generic (PLEG): container finished" podID="276d6e96-87af-451e-80a6-0267847f5760" containerID="ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1" exitCode=0 Nov 24 12:13:36 crc kubenswrapper[4678]: I1124 12:13:36.717196 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvd8x" event={"ID":"276d6e96-87af-451e-80a6-0267847f5760","Type":"ContainerDied","Data":"ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1"} Nov 24 12:13:38 crc 
kubenswrapper[4678]: I1124 12:13:38.748221 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvd8x" event={"ID":"276d6e96-87af-451e-80a6-0267847f5760","Type":"ContainerStarted","Data":"d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e"} Nov 24 12:13:51 crc kubenswrapper[4678]: I1124 12:13:51.905828 4678 generic.go:334] "Generic (PLEG): container finished" podID="276d6e96-87af-451e-80a6-0267847f5760" containerID="d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e" exitCode=0 Nov 24 12:13:51 crc kubenswrapper[4678]: I1124 12:13:51.909287 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvd8x" event={"ID":"276d6e96-87af-451e-80a6-0267847f5760","Type":"ContainerDied","Data":"d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e"} Nov 24 12:13:53 crc kubenswrapper[4678]: I1124 12:13:53.951952 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvd8x" event={"ID":"276d6e96-87af-451e-80a6-0267847f5760","Type":"ContainerStarted","Data":"9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e"} Nov 24 12:13:53 crc kubenswrapper[4678]: I1124 12:13:53.989318 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvd8x" podStartSLOduration=3.987941676 podStartE2EDuration="19.989299046s" podCreationTimestamp="2025-11-24 12:13:34 +0000 UTC" firstStartedPulling="2025-11-24 12:13:36.720014222 +0000 UTC m=+3427.651073861" lastFinishedPulling="2025-11-24 12:13:52.721371592 +0000 UTC m=+3443.652431231" observedRunningTime="2025-11-24 12:13:53.977291223 +0000 UTC m=+3444.908350872" watchObservedRunningTime="2025-11-24 12:13:53.989299046 +0000 UTC m=+3444.920358685" Nov 24 12:13:54 crc kubenswrapper[4678]: I1124 12:13:54.948777 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:54 crc kubenswrapper[4678]: I1124 12:13:54.948845 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:13:55 crc kubenswrapper[4678]: I1124 12:13:55.998865 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xvd8x" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="registry-server" probeResult="failure" output=< Nov 24 12:13:55 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:13:55 crc kubenswrapper[4678]: > Nov 24 12:14:06 crc kubenswrapper[4678]: I1124 12:14:06.002304 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xvd8x" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="registry-server" probeResult="failure" output=< Nov 24 12:14:06 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:14:06 crc kubenswrapper[4678]: > Nov 24 12:14:16 crc kubenswrapper[4678]: I1124 12:14:16.002269 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xvd8x" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="registry-server" probeResult="failure" output=< Nov 24 12:14:16 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:14:16 crc kubenswrapper[4678]: > Nov 24 12:14:26 crc kubenswrapper[4678]: I1124 12:14:26.003085 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xvd8x" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="registry-server" probeResult="failure" output=< Nov 24 12:14:26 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:14:26 crc kubenswrapper[4678]: > Nov 24 12:14:30 crc kubenswrapper[4678]: I1124 12:14:30.297035 4678 patch_prober.go:28] 
interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:14:30 crc kubenswrapper[4678]: I1124 12:14:30.297592 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:14:35 crc kubenswrapper[4678]: I1124 12:14:34.999784 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:14:35 crc kubenswrapper[4678]: I1124 12:14:35.069922 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:14:35 crc kubenswrapper[4678]: I1124 12:14:35.838139 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvd8x"] Nov 24 12:14:36 crc kubenswrapper[4678]: I1124 12:14:36.544567 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xvd8x" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="registry-server" containerID="cri-o://9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e" gracePeriod=2 Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.182315 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.236262 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-catalog-content\") pod \"276d6e96-87af-451e-80a6-0267847f5760\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.236500 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-utilities\") pod \"276d6e96-87af-451e-80a6-0267847f5760\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.237034 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtmq8\" (UniqueName: \"kubernetes.io/projected/276d6e96-87af-451e-80a6-0267847f5760-kube-api-access-wtmq8\") pod \"276d6e96-87af-451e-80a6-0267847f5760\" (UID: \"276d6e96-87af-451e-80a6-0267847f5760\") " Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.237782 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-utilities" (OuterVolumeSpecName: "utilities") pod "276d6e96-87af-451e-80a6-0267847f5760" (UID: "276d6e96-87af-451e-80a6-0267847f5760"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.237921 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.252254 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/276d6e96-87af-451e-80a6-0267847f5760-kube-api-access-wtmq8" (OuterVolumeSpecName: "kube-api-access-wtmq8") pod "276d6e96-87af-451e-80a6-0267847f5760" (UID: "276d6e96-87af-451e-80a6-0267847f5760"). InnerVolumeSpecName "kube-api-access-wtmq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.338598 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "276d6e96-87af-451e-80a6-0267847f5760" (UID: "276d6e96-87af-451e-80a6-0267847f5760"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.339127 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtmq8\" (UniqueName: \"kubernetes.io/projected/276d6e96-87af-451e-80a6-0267847f5760-kube-api-access-wtmq8\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.339152 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276d6e96-87af-451e-80a6-0267847f5760-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.559025 4678 generic.go:334] "Generic (PLEG): container finished" podID="276d6e96-87af-451e-80a6-0267847f5760" containerID="9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e" exitCode=0 Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.559096 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvd8x" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.559090 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvd8x" event={"ID":"276d6e96-87af-451e-80a6-0267847f5760","Type":"ContainerDied","Data":"9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e"} Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.559156 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvd8x" event={"ID":"276d6e96-87af-451e-80a6-0267847f5760","Type":"ContainerDied","Data":"fe1d7e4b5764a3da76e04915985973cbdebf19e42c1f7310ed8fb1dd26699f4a"} Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.559176 4678 scope.go:117] "RemoveContainer" containerID="9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.598546 4678 scope.go:117] "RemoveContainer" 
containerID="d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.616947 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvd8x"] Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.632563 4678 scope.go:117] "RemoveContainer" containerID="ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.633013 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xvd8x"] Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.702840 4678 scope.go:117] "RemoveContainer" containerID="9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e" Nov 24 12:14:37 crc kubenswrapper[4678]: E1124 12:14:37.703544 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e\": container with ID starting with 9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e not found: ID does not exist" containerID="9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.703609 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e"} err="failed to get container status \"9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e\": rpc error: code = NotFound desc = could not find container \"9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e\": container with ID starting with 9b735dc0ebd1c2ea6319b0b51e7edd0b0a53ca920ace68ad63b487422066cd0e not found: ID does not exist" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.703648 4678 scope.go:117] "RemoveContainer" 
containerID="d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e" Nov 24 12:14:37 crc kubenswrapper[4678]: E1124 12:14:37.704396 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e\": container with ID starting with d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e not found: ID does not exist" containerID="d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.704447 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e"} err="failed to get container status \"d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e\": rpc error: code = NotFound desc = could not find container \"d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e\": container with ID starting with d57eec18b31a34e5ed75aeacab581543357acddcd558728475d32975a56b101e not found: ID does not exist" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.704483 4678 scope.go:117] "RemoveContainer" containerID="ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1" Nov 24 12:14:37 crc kubenswrapper[4678]: E1124 12:14:37.705041 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1\": container with ID starting with ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1 not found: ID does not exist" containerID="ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.705079 4678 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1"} err="failed to get container status \"ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1\": rpc error: code = NotFound desc = could not find container \"ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1\": container with ID starting with ab8e6ba537829d73957b9bc7917c17cb878668d3ed5608cfd0363978b262fce1 not found: ID does not exist" Nov 24 12:14:37 crc kubenswrapper[4678]: I1124 12:14:37.915208 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="276d6e96-87af-451e-80a6-0267847f5760" path="/var/lib/kubelet/pods/276d6e96-87af-451e-80a6-0267847f5760/volumes" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.168403 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj"] Nov 24 12:15:00 crc kubenswrapper[4678]: E1124 12:15:00.169911 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="registry-server" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.169935 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="registry-server" Nov 24 12:15:00 crc kubenswrapper[4678]: E1124 12:15:00.169970 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="extract-content" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.169983 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="extract-content" Nov 24 12:15:00 crc kubenswrapper[4678]: E1124 12:15:00.170014 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="extract-utilities" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.170024 4678 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="extract-utilities" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.170332 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="276d6e96-87af-451e-80a6-0267847f5760" containerName="registry-server" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.171491 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.173873 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.174625 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.180038 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj"] Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.262406 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-config-volume\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.262512 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-secret-volume\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 
12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.262531 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4fx\" (UniqueName: \"kubernetes.io/projected/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-kube-api-access-7v4fx\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.297376 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.297445 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.366082 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-config-volume\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.366211 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-secret-volume\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.366232 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v4fx\" (UniqueName: \"kubernetes.io/projected/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-kube-api-access-7v4fx\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.367486 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-config-volume\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.374617 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-secret-volume\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.386214 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v4fx\" (UniqueName: \"kubernetes.io/projected/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-kube-api-access-7v4fx\") pod \"collect-profiles-29399775-hjclj\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.499285 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:00 crc kubenswrapper[4678]: I1124 12:15:00.982881 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj"] Nov 24 12:15:01 crc kubenswrapper[4678]: I1124 12:15:01.831492 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" event={"ID":"2e0a4ab5-38c5-44ee-b039-609c6a3589f4","Type":"ContainerStarted","Data":"03a7f6a18116ef5ce44d4b8a06c75ac3aac41b5e650dab83e01b1c38bbd55bc5"} Nov 24 12:15:01 crc kubenswrapper[4678]: I1124 12:15:01.831868 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" event={"ID":"2e0a4ab5-38c5-44ee-b039-609c6a3589f4","Type":"ContainerStarted","Data":"e00160df4758671833490d51f9f7cecf96905d23cb7b12ef2aaa57845c875bbf"} Nov 24 12:15:01 crc kubenswrapper[4678]: I1124 12:15:01.860457 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" podStartSLOduration=1.860434389 podStartE2EDuration="1.860434389s" podCreationTimestamp="2025-11-24 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:15:01.853112672 +0000 UTC m=+3512.784172311" watchObservedRunningTime="2025-11-24 12:15:01.860434389 +0000 UTC m=+3512.791494028" Nov 24 12:15:02 crc kubenswrapper[4678]: I1124 12:15:02.861496 4678 generic.go:334] "Generic (PLEG): container finished" podID="2e0a4ab5-38c5-44ee-b039-609c6a3589f4" containerID="03a7f6a18116ef5ce44d4b8a06c75ac3aac41b5e650dab83e01b1c38bbd55bc5" exitCode=0 Nov 24 12:15:02 crc kubenswrapper[4678]: I1124 12:15:02.861740 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" event={"ID":"2e0a4ab5-38c5-44ee-b039-609c6a3589f4","Type":"ContainerDied","Data":"03a7f6a18116ef5ce44d4b8a06c75ac3aac41b5e650dab83e01b1c38bbd55bc5"} Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.292467 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.375590 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-secret-volume\") pod \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.376093 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v4fx\" (UniqueName: \"kubernetes.io/projected/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-kube-api-access-7v4fx\") pod \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.376131 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-config-volume\") pod \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\" (UID: \"2e0a4ab5-38c5-44ee-b039-609c6a3589f4\") " Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.377012 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e0a4ab5-38c5-44ee-b039-609c6a3589f4" (UID: "2e0a4ab5-38c5-44ee-b039-609c6a3589f4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.382774 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e0a4ab5-38c5-44ee-b039-609c6a3589f4" (UID: "2e0a4ab5-38c5-44ee-b039-609c6a3589f4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.383607 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-kube-api-access-7v4fx" (OuterVolumeSpecName: "kube-api-access-7v4fx") pod "2e0a4ab5-38c5-44ee-b039-609c6a3589f4" (UID: "2e0a4ab5-38c5-44ee-b039-609c6a3589f4"). InnerVolumeSpecName "kube-api-access-7v4fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.479454 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.479495 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v4fx\" (UniqueName: \"kubernetes.io/projected/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-kube-api-access-7v4fx\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.479505 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e0a4ab5-38c5-44ee-b039-609c6a3589f4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.885485 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" 
event={"ID":"2e0a4ab5-38c5-44ee-b039-609c6a3589f4","Type":"ContainerDied","Data":"e00160df4758671833490d51f9f7cecf96905d23cb7b12ef2aaa57845c875bbf"} Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.885529 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00160df4758671833490d51f9f7cecf96905d23cb7b12ef2aaa57845c875bbf" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.885570 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj" Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.931679 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb"] Nov 24 12:15:04 crc kubenswrapper[4678]: I1124 12:15:04.941796 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399730-xppbb"] Nov 24 12:15:05 crc kubenswrapper[4678]: I1124 12:15:05.914631 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12469164-9579-47b7-8b32-2cf4fd1cb806" path="/var/lib/kubelet/pods/12469164-9579-47b7-8b32-2cf4fd1cb806/volumes" Nov 24 12:15:30 crc kubenswrapper[4678]: I1124 12:15:30.297484 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:15:30 crc kubenswrapper[4678]: I1124 12:15:30.298120 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:15:30 crc 
kubenswrapper[4678]: I1124 12:15:30.298178 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:15:30 crc kubenswrapper[4678]: I1124 12:15:30.299257 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:15:30 crc kubenswrapper[4678]: I1124 12:15:30.299325 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" gracePeriod=600 Nov 24 12:15:30 crc kubenswrapper[4678]: E1124 12:15:30.992034 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:15:31 crc kubenswrapper[4678]: I1124 12:15:31.180905 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" exitCode=0 Nov 24 12:15:31 crc kubenswrapper[4678]: I1124 12:15:31.180966 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614"} Nov 24 12:15:31 crc kubenswrapper[4678]: I1124 12:15:31.181012 4678 scope.go:117] "RemoveContainer" containerID="d4dd29508d8e0bdb527834c0803c6c584ca7d2f5db4eb1981ddbeb49f842bb0e" Nov 24 12:15:31 crc kubenswrapper[4678]: I1124 12:15:31.181754 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:15:31 crc kubenswrapper[4678]: E1124 12:15:31.182119 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:15:38 crc kubenswrapper[4678]: I1124 12:15:38.340214 4678 scope.go:117] "RemoveContainer" containerID="97950105bad1d54bdda021339de36b3cc48a460a1c3bc09ae1a1c75662e2f740" Nov 24 12:15:43 crc kubenswrapper[4678]: I1124 12:15:43.896421 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:15:43 crc kubenswrapper[4678]: E1124 12:15:43.897264 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:15:57 crc kubenswrapper[4678]: I1124 12:15:57.896308 4678 scope.go:117] "RemoveContainer" 
containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:15:57 crc kubenswrapper[4678]: E1124 12:15:57.897286 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:16:09 crc kubenswrapper[4678]: I1124 12:16:09.905304 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:16:09 crc kubenswrapper[4678]: E1124 12:16:09.906699 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:16:24 crc kubenswrapper[4678]: I1124 12:16:24.896150 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:16:24 crc kubenswrapper[4678]: E1124 12:16:24.897334 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:16:35 crc kubenswrapper[4678]: I1124 12:16:35.896372 4678 scope.go:117] 
"RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:16:35 crc kubenswrapper[4678]: E1124 12:16:35.897135 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:16:46 crc kubenswrapper[4678]: I1124 12:16:46.896188 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:16:46 crc kubenswrapper[4678]: E1124 12:16:46.897640 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:17:01 crc kubenswrapper[4678]: I1124 12:17:01.899029 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:17:01 crc kubenswrapper[4678]: E1124 12:17:01.900265 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:17:15 crc kubenswrapper[4678]: I1124 12:17:15.897130 
4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:17:15 crc kubenswrapper[4678]: E1124 12:17:15.899291 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:17:29 crc kubenswrapper[4678]: I1124 12:17:29.907515 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:17:29 crc kubenswrapper[4678]: E1124 12:17:29.908262 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:17:41 crc kubenswrapper[4678]: I1124 12:17:41.904183 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:17:41 crc kubenswrapper[4678]: E1124 12:17:41.905179 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:17:53 crc kubenswrapper[4678]: I1124 
12:17:53.898315 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:17:53 crc kubenswrapper[4678]: E1124 12:17:53.899498 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:18:07 crc kubenswrapper[4678]: I1124 12:18:07.896766 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:18:07 crc kubenswrapper[4678]: E1124 12:18:07.898488 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:18:20 crc kubenswrapper[4678]: I1124 12:18:20.896329 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:18:20 crc kubenswrapper[4678]: E1124 12:18:20.897368 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:18:33 crc 
kubenswrapper[4678]: I1124 12:18:33.896786 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:18:33 crc kubenswrapper[4678]: E1124 12:18:33.897613 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:18:45 crc kubenswrapper[4678]: I1124 12:18:45.898183 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:18:45 crc kubenswrapper[4678]: E1124 12:18:45.899418 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:18:57 crc kubenswrapper[4678]: I1124 12:18:57.896425 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:18:57 crc kubenswrapper[4678]: E1124 12:18:57.897333 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 
24 12:19:09 crc kubenswrapper[4678]: I1124 12:19:09.906378 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:19:09 crc kubenswrapper[4678]: E1124 12:19:09.908460 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:19:24 crc kubenswrapper[4678]: I1124 12:19:24.896535 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:19:24 crc kubenswrapper[4678]: E1124 12:19:24.897341 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:19:36 crc kubenswrapper[4678]: I1124 12:19:36.896212 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:19:36 crc kubenswrapper[4678]: E1124 12:19:36.897068 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:19:50 crc kubenswrapper[4678]: I1124 12:19:50.896955 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:19:50 crc kubenswrapper[4678]: E1124 12:19:50.897762 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:20:05 crc kubenswrapper[4678]: I1124 12:20:05.896724 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:20:05 crc kubenswrapper[4678]: E1124 12:20:05.898165 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:20:20 crc kubenswrapper[4678]: I1124 12:20:20.897422 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:20:20 crc kubenswrapper[4678]: E1124 12:20:20.900884 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:20:31 crc kubenswrapper[4678]: I1124 12:20:31.897034 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:20:32 crc kubenswrapper[4678]: I1124 12:20:32.909284 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"16489c56369f7c537f307c201030a2a1cdbb657958c81b32bb3a0e8ddbf7ba5b"} Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.684286 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fm8"] Nov 24 12:22:48 crc kubenswrapper[4678]: E1124 12:22:48.685297 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0a4ab5-38c5-44ee-b039-609c6a3589f4" containerName="collect-profiles" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.685312 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0a4ab5-38c5-44ee-b039-609c6a3589f4" containerName="collect-profiles" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.685609 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0a4ab5-38c5-44ee-b039-609c6a3589f4" containerName="collect-profiles" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.687549 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.698416 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fm8"] Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.849517 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-utilities\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.849625 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fztss\" (UniqueName: \"kubernetes.io/projected/148de679-89c3-4784-85f7-756b915a91e6-kube-api-access-fztss\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.849766 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-catalog-content\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.953275 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-utilities\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.953386 4678 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fztss\" (UniqueName: \"kubernetes.io/projected/148de679-89c3-4784-85f7-756b915a91e6-kube-api-access-fztss\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.953442 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-catalog-content\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.953940 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-utilities\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.954025 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-catalog-content\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:48 crc kubenswrapper[4678]: I1124 12:22:48.974552 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fztss\" (UniqueName: \"kubernetes.io/projected/148de679-89c3-4784-85f7-756b915a91e6-kube-api-access-fztss\") pod \"redhat-marketplace-m2fm8\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:49 crc kubenswrapper[4678]: I1124 12:22:49.010164 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:49 crc kubenswrapper[4678]: I1124 12:22:49.938616 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fm8"] Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.492015 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sjxvj"] Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.495374 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.525209 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sjxvj"] Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.570122 4678 generic.go:334] "Generic (PLEG): container finished" podID="148de679-89c3-4784-85f7-756b915a91e6" containerID="2c25bdee2636c51a6fafecd6a48ebfa5d387e58057b94fb7445d18c95fc2f85e" exitCode=0 Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.570176 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fm8" event={"ID":"148de679-89c3-4784-85f7-756b915a91e6","Type":"ContainerDied","Data":"2c25bdee2636c51a6fafecd6a48ebfa5d387e58057b94fb7445d18c95fc2f85e"} Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.570546 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fm8" event={"ID":"148de679-89c3-4784-85f7-756b915a91e6","Type":"ContainerStarted","Data":"ec9869050dccd7d160e636d0a58bd58c125a55d4af418a1b7e4787f0a85ae2b8"} Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.572859 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.600569 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-t85bg\" (UniqueName: \"kubernetes.io/projected/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-kube-api-access-t85bg\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.600640 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-utilities\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.600832 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-catalog-content\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.704339 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t85bg\" (UniqueName: \"kubernetes.io/projected/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-kube-api-access-t85bg\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.704407 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-utilities\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.704557 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-catalog-content\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.706436 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-catalog-content\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.706524 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-utilities\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.734379 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t85bg\" (UniqueName: \"kubernetes.io/projected/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-kube-api-access-t85bg\") pod \"community-operators-sjxvj\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:50 crc kubenswrapper[4678]: I1124 12:22:50.824907 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.112436 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gx7hl"] Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.118732 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.129097 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gx7hl"] Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.222302 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvsbs\" (UniqueName: \"kubernetes.io/projected/c343f8b1-864a-4ac7-81ca-6faab22498bf-kube-api-access-wvsbs\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.222474 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-utilities\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.222543 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-catalog-content\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.325105 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-utilities\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.325206 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-catalog-content\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.325331 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvsbs\" (UniqueName: \"kubernetes.io/projected/c343f8b1-864a-4ac7-81ca-6faab22498bf-kube-api-access-wvsbs\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.326141 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-utilities\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.326156 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-catalog-content\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.348422 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvsbs\" (UniqueName: \"kubernetes.io/projected/c343f8b1-864a-4ac7-81ca-6faab22498bf-kube-api-access-wvsbs\") pod \"certified-operators-gx7hl\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.456286 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.548996 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sjxvj"] Nov 24 12:22:51 crc kubenswrapper[4678]: W1124 12:22:51.556844 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa96cd29_5c94_4247_bac0_9ae04bdb3c72.slice/crio-5b1d36ab91a44b4fe595fda94c520140827930f986679b87ccc423417f334e07 WatchSource:0}: Error finding container 5b1d36ab91a44b4fe595fda94c520140827930f986679b87ccc423417f334e07: Status 404 returned error can't find the container with id 5b1d36ab91a44b4fe595fda94c520140827930f986679b87ccc423417f334e07 Nov 24 12:22:51 crc kubenswrapper[4678]: I1124 12:22:51.594326 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjxvj" event={"ID":"aa96cd29-5c94-4247-bac0-9ae04bdb3c72","Type":"ContainerStarted","Data":"5b1d36ab91a44b4fe595fda94c520140827930f986679b87ccc423417f334e07"} Nov 24 12:22:52 crc kubenswrapper[4678]: I1124 12:22:52.095542 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gx7hl"] Nov 24 12:22:52 crc kubenswrapper[4678]: W1124 12:22:52.096294 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc343f8b1_864a_4ac7_81ca_6faab22498bf.slice/crio-3a15f8cced9110d22205504fedc885cd5732f0695944c9e2abedd6d4c2ae74e1 WatchSource:0}: Error finding container 3a15f8cced9110d22205504fedc885cd5732f0695944c9e2abedd6d4c2ae74e1: Status 404 returned error can't find the container with id 3a15f8cced9110d22205504fedc885cd5732f0695944c9e2abedd6d4c2ae74e1 Nov 24 12:22:52 crc kubenswrapper[4678]: I1124 12:22:52.606323 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fm8" 
event={"ID":"148de679-89c3-4784-85f7-756b915a91e6","Type":"ContainerStarted","Data":"249a9b1b41f3ef205b3403462b806f785e48faa4df15bf16d6f0cc715ab69352"} Nov 24 12:22:52 crc kubenswrapper[4678]: I1124 12:22:52.608706 4678 generic.go:334] "Generic (PLEG): container finished" podID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerID="79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a" exitCode=0 Nov 24 12:22:52 crc kubenswrapper[4678]: I1124 12:22:52.608850 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx7hl" event={"ID":"c343f8b1-864a-4ac7-81ca-6faab22498bf","Type":"ContainerDied","Data":"79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a"} Nov 24 12:22:52 crc kubenswrapper[4678]: I1124 12:22:52.608886 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx7hl" event={"ID":"c343f8b1-864a-4ac7-81ca-6faab22498bf","Type":"ContainerStarted","Data":"3a15f8cced9110d22205504fedc885cd5732f0695944c9e2abedd6d4c2ae74e1"} Nov 24 12:22:52 crc kubenswrapper[4678]: I1124 12:22:52.612173 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjxvj" event={"ID":"aa96cd29-5c94-4247-bac0-9ae04bdb3c72","Type":"ContainerDied","Data":"9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2"} Nov 24 12:22:52 crc kubenswrapper[4678]: I1124 12:22:52.612384 4678 generic.go:334] "Generic (PLEG): container finished" podID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerID="9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2" exitCode=0 Nov 24 12:22:53 crc kubenswrapper[4678]: I1124 12:22:53.626575 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjxvj" event={"ID":"aa96cd29-5c94-4247-bac0-9ae04bdb3c72","Type":"ContainerStarted","Data":"56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b"} Nov 24 12:22:53 crc kubenswrapper[4678]: 
I1124 12:22:53.631406 4678 generic.go:334] "Generic (PLEG): container finished" podID="148de679-89c3-4784-85f7-756b915a91e6" containerID="249a9b1b41f3ef205b3403462b806f785e48faa4df15bf16d6f0cc715ab69352" exitCode=0 Nov 24 12:22:53 crc kubenswrapper[4678]: I1124 12:22:53.631447 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fm8" event={"ID":"148de679-89c3-4784-85f7-756b915a91e6","Type":"ContainerDied","Data":"249a9b1b41f3ef205b3403462b806f785e48faa4df15bf16d6f0cc715ab69352"} Nov 24 12:22:54 crc kubenswrapper[4678]: I1124 12:22:54.651214 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fm8" event={"ID":"148de679-89c3-4784-85f7-756b915a91e6","Type":"ContainerStarted","Data":"0a99439a310534f0614ec7e722ad18327a35626c78be08952bb893b378169ba6"} Nov 24 12:22:54 crc kubenswrapper[4678]: I1124 12:22:54.660097 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx7hl" event={"ID":"c343f8b1-864a-4ac7-81ca-6faab22498bf","Type":"ContainerStarted","Data":"71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba"} Nov 24 12:22:54 crc kubenswrapper[4678]: I1124 12:22:54.692615 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m2fm8" podStartSLOduration=3.178747322 podStartE2EDuration="6.692596196s" podCreationTimestamp="2025-11-24 12:22:48 +0000 UTC" firstStartedPulling="2025-11-24 12:22:50.57251833 +0000 UTC m=+3981.503577969" lastFinishedPulling="2025-11-24 12:22:54.086367204 +0000 UTC m=+3985.017426843" observedRunningTime="2025-11-24 12:22:54.677694305 +0000 UTC m=+3985.608753944" watchObservedRunningTime="2025-11-24 12:22:54.692596196 +0000 UTC m=+3985.623655825" Nov 24 12:22:57 crc kubenswrapper[4678]: I1124 12:22:57.691756 4678 generic.go:334] "Generic (PLEG): container finished" podID="c343f8b1-864a-4ac7-81ca-6faab22498bf" 
containerID="71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba" exitCode=0 Nov 24 12:22:57 crc kubenswrapper[4678]: I1124 12:22:57.692371 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx7hl" event={"ID":"c343f8b1-864a-4ac7-81ca-6faab22498bf","Type":"ContainerDied","Data":"71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba"} Nov 24 12:22:57 crc kubenswrapper[4678]: I1124 12:22:57.702165 4678 generic.go:334] "Generic (PLEG): container finished" podID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerID="56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b" exitCode=0 Nov 24 12:22:57 crc kubenswrapper[4678]: I1124 12:22:57.702217 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjxvj" event={"ID":"aa96cd29-5c94-4247-bac0-9ae04bdb3c72","Type":"ContainerDied","Data":"56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b"} Nov 24 12:22:58 crc kubenswrapper[4678]: I1124 12:22:58.740370 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx7hl" event={"ID":"c343f8b1-864a-4ac7-81ca-6faab22498bf","Type":"ContainerStarted","Data":"e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163"} Nov 24 12:22:58 crc kubenswrapper[4678]: I1124 12:22:58.765723 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjxvj" event={"ID":"aa96cd29-5c94-4247-bac0-9ae04bdb3c72","Type":"ContainerStarted","Data":"40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b"} Nov 24 12:22:58 crc kubenswrapper[4678]: I1124 12:22:58.789263 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gx7hl" podStartSLOduration=2.24995988 podStartE2EDuration="7.78923951s" podCreationTimestamp="2025-11-24 12:22:51 +0000 UTC" firstStartedPulling="2025-11-24 12:22:52.610730721 
+0000 UTC m=+3983.541790360" lastFinishedPulling="2025-11-24 12:22:58.150010351 +0000 UTC m=+3989.081069990" observedRunningTime="2025-11-24 12:22:58.778128891 +0000 UTC m=+3989.709188540" watchObservedRunningTime="2025-11-24 12:22:58.78923951 +0000 UTC m=+3989.720299149" Nov 24 12:22:58 crc kubenswrapper[4678]: I1124 12:22:58.824529 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sjxvj" podStartSLOduration=3.369012674 podStartE2EDuration="8.82450658s" podCreationTimestamp="2025-11-24 12:22:50 +0000 UTC" firstStartedPulling="2025-11-24 12:22:52.614152213 +0000 UTC m=+3983.545211852" lastFinishedPulling="2025-11-24 12:22:58.069646119 +0000 UTC m=+3989.000705758" observedRunningTime="2025-11-24 12:22:58.808233822 +0000 UTC m=+3989.739293461" watchObservedRunningTime="2025-11-24 12:22:58.82450658 +0000 UTC m=+3989.755566219" Nov 24 12:22:59 crc kubenswrapper[4678]: I1124 12:22:59.011086 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:59 crc kubenswrapper[4678]: I1124 12:22:59.011152 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:59 crc kubenswrapper[4678]: I1124 12:22:59.072252 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:22:59 crc kubenswrapper[4678]: I1124 12:22:59.831483 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:23:00 crc kubenswrapper[4678]: I1124 12:23:00.297147 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 24 12:23:00 crc kubenswrapper[4678]: I1124 12:23:00.297239 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:23:00 crc kubenswrapper[4678]: I1124 12:23:00.826069 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:23:00 crc kubenswrapper[4678]: I1124 12:23:00.826396 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:23:01 crc kubenswrapper[4678]: I1124 12:23:01.457150 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:23:01 crc kubenswrapper[4678]: I1124 12:23:01.458067 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:23:02 crc kubenswrapper[4678]: I1124 12:23:02.041121 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sjxvj" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="registry-server" probeResult="failure" output=< Nov 24 12:23:02 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:23:02 crc kubenswrapper[4678]: > Nov 24 12:23:02 crc kubenswrapper[4678]: I1124 12:23:02.074807 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fm8"] Nov 24 12:23:02 crc kubenswrapper[4678]: I1124 12:23:02.075026 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m2fm8" 
podUID="148de679-89c3-4784-85f7-756b915a91e6" containerName="registry-server" containerID="cri-o://0a99439a310534f0614ec7e722ad18327a35626c78be08952bb893b378169ba6" gracePeriod=2 Nov 24 12:23:02 crc kubenswrapper[4678]: I1124 12:23:02.517266 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gx7hl" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="registry-server" probeResult="failure" output=< Nov 24 12:23:02 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:23:02 crc kubenswrapper[4678]: > Nov 24 12:23:02 crc kubenswrapper[4678]: I1124 12:23:02.815998 4678 generic.go:334] "Generic (PLEG): container finished" podID="148de679-89c3-4784-85f7-756b915a91e6" containerID="0a99439a310534f0614ec7e722ad18327a35626c78be08952bb893b378169ba6" exitCode=0 Nov 24 12:23:02 crc kubenswrapper[4678]: I1124 12:23:02.816071 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fm8" event={"ID":"148de679-89c3-4784-85f7-756b915a91e6","Type":"ContainerDied","Data":"0a99439a310534f0614ec7e722ad18327a35626c78be08952bb893b378169ba6"} Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.224556 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.280896 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fztss\" (UniqueName: \"kubernetes.io/projected/148de679-89c3-4784-85f7-756b915a91e6-kube-api-access-fztss\") pod \"148de679-89c3-4784-85f7-756b915a91e6\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.281126 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-catalog-content\") pod \"148de679-89c3-4784-85f7-756b915a91e6\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.281262 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-utilities\") pod \"148de679-89c3-4784-85f7-756b915a91e6\" (UID: \"148de679-89c3-4784-85f7-756b915a91e6\") " Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.306801 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-utilities" (OuterVolumeSpecName: "utilities") pod "148de679-89c3-4784-85f7-756b915a91e6" (UID: "148de679-89c3-4784-85f7-756b915a91e6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.312608 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.333979 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148de679-89c3-4784-85f7-756b915a91e6-kube-api-access-fztss" (OuterVolumeSpecName: "kube-api-access-fztss") pod "148de679-89c3-4784-85f7-756b915a91e6" (UID: "148de679-89c3-4784-85f7-756b915a91e6"). InnerVolumeSpecName "kube-api-access-fztss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.386791 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "148de679-89c3-4784-85f7-756b915a91e6" (UID: "148de679-89c3-4784-85f7-756b915a91e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.416332 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fztss\" (UniqueName: \"kubernetes.io/projected/148de679-89c3-4784-85f7-756b915a91e6-kube-api-access-fztss\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.416378 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/148de679-89c3-4784-85f7-756b915a91e6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.833405 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2fm8" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.833330 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2fm8" event={"ID":"148de679-89c3-4784-85f7-756b915a91e6","Type":"ContainerDied","Data":"ec9869050dccd7d160e636d0a58bd58c125a55d4af418a1b7e4787f0a85ae2b8"} Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.834587 4678 scope.go:117] "RemoveContainer" containerID="0a99439a310534f0614ec7e722ad18327a35626c78be08952bb893b378169ba6" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.880249 4678 scope.go:117] "RemoveContainer" containerID="249a9b1b41f3ef205b3403462b806f785e48faa4df15bf16d6f0cc715ab69352" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.891060 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fm8"] Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.913473 4678 scope.go:117] "RemoveContainer" containerID="2c25bdee2636c51a6fafecd6a48ebfa5d387e58057b94fb7445d18c95fc2f85e" Nov 24 12:23:03 crc kubenswrapper[4678]: I1124 12:23:03.928817 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2fm8"] Nov 24 12:23:05 crc kubenswrapper[4678]: I1124 12:23:05.910486 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148de679-89c3-4784-85f7-756b915a91e6" path="/var/lib/kubelet/pods/148de679-89c3-4784-85f7-756b915a91e6/volumes" Nov 24 12:23:10 crc kubenswrapper[4678]: I1124 12:23:10.880427 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:23:10 crc kubenswrapper[4678]: I1124 12:23:10.929162 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:23:11 crc kubenswrapper[4678]: I1124 12:23:11.116883 4678 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sjxvj"] Nov 24 12:23:11 crc kubenswrapper[4678]: I1124 12:23:11.508874 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:23:11 crc kubenswrapper[4678]: I1124 12:23:11.574517 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:23:11 crc kubenswrapper[4678]: I1124 12:23:11.951153 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sjxvj" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="registry-server" containerID="cri-o://40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b" gracePeriod=2 Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.522072 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.670882 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-catalog-content\") pod \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.671103 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-utilities\") pod \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.671243 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85bg\" (UniqueName: 
\"kubernetes.io/projected/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-kube-api-access-t85bg\") pod \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\" (UID: \"aa96cd29-5c94-4247-bac0-9ae04bdb3c72\") " Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.671987 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-utilities" (OuterVolumeSpecName: "utilities") pod "aa96cd29-5c94-4247-bac0-9ae04bdb3c72" (UID: "aa96cd29-5c94-4247-bac0-9ae04bdb3c72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.672568 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.678447 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-kube-api-access-t85bg" (OuterVolumeSpecName: "kube-api-access-t85bg") pod "aa96cd29-5c94-4247-bac0-9ae04bdb3c72" (UID: "aa96cd29-5c94-4247-bac0-9ae04bdb3c72"). InnerVolumeSpecName "kube-api-access-t85bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.729139 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa96cd29-5c94-4247-bac0-9ae04bdb3c72" (UID: "aa96cd29-5c94-4247-bac0-9ae04bdb3c72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.781186 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.781236 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t85bg\" (UniqueName: \"kubernetes.io/projected/aa96cd29-5c94-4247-bac0-9ae04bdb3c72-kube-api-access-t85bg\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.998709 4678 generic.go:334] "Generic (PLEG): container finished" podID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerID="40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b" exitCode=0 Nov 24 12:23:12 crc kubenswrapper[4678]: I1124 12:23:12.999019 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjxvj" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:12.999051 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjxvj" event={"ID":"aa96cd29-5c94-4247-bac0-9ae04bdb3c72","Type":"ContainerDied","Data":"40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b"} Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.000790 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjxvj" event={"ID":"aa96cd29-5c94-4247-bac0-9ae04bdb3c72","Type":"ContainerDied","Data":"5b1d36ab91a44b4fe595fda94c520140827930f986679b87ccc423417f334e07"} Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.000826 4678 scope.go:117] "RemoveContainer" containerID="40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.029704 4678 scope.go:117] "RemoveContainer" 
containerID="56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.042720 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sjxvj"] Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.058440 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sjxvj"] Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.063215 4678 scope.go:117] "RemoveContainer" containerID="9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.130230 4678 scope.go:117] "RemoveContainer" containerID="40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b" Nov 24 12:23:13 crc kubenswrapper[4678]: E1124 12:23:13.130680 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b\": container with ID starting with 40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b not found: ID does not exist" containerID="40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.130733 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b"} err="failed to get container status \"40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b\": rpc error: code = NotFound desc = could not find container \"40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b\": container with ID starting with 40222eac7f67003824e18ac9055d00da8594cd683e87c0e6bfd13f372d3f529b not found: ID does not exist" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.130767 4678 scope.go:117] "RemoveContainer" 
containerID="56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b" Nov 24 12:23:13 crc kubenswrapper[4678]: E1124 12:23:13.131111 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b\": container with ID starting with 56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b not found: ID does not exist" containerID="56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.131133 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b"} err="failed to get container status \"56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b\": rpc error: code = NotFound desc = could not find container \"56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b\": container with ID starting with 56384518d2bc498654bbeb3158aa30213ccaeffcdd27792db15c4f64a7ce5c7b not found: ID does not exist" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.131148 4678 scope.go:117] "RemoveContainer" containerID="9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2" Nov 24 12:23:13 crc kubenswrapper[4678]: E1124 12:23:13.131554 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2\": container with ID starting with 9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2 not found: ID does not exist" containerID="9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.131580 4678 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2"} err="failed to get container status \"9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2\": rpc error: code = NotFound desc = could not find container \"9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2\": container with ID starting with 9fecba5b8c4605c81cfbb33ecc5f8c3dc04fb8a554bdac8e5960fd19195dd7d2 not found: ID does not exist" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.915139 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" path="/var/lib/kubelet/pods/aa96cd29-5c94-4247-bac0-9ae04bdb3c72/volumes" Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.929351 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gx7hl"] Nov 24 12:23:13 crc kubenswrapper[4678]: I1124 12:23:13.929702 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gx7hl" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="registry-server" containerID="cri-o://e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163" gracePeriod=2 Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.509185 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.635391 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-catalog-content\") pod \"c343f8b1-864a-4ac7-81ca-6faab22498bf\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.635478 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-utilities\") pod \"c343f8b1-864a-4ac7-81ca-6faab22498bf\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.635522 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvsbs\" (UniqueName: \"kubernetes.io/projected/c343f8b1-864a-4ac7-81ca-6faab22498bf-kube-api-access-wvsbs\") pod \"c343f8b1-864a-4ac7-81ca-6faab22498bf\" (UID: \"c343f8b1-864a-4ac7-81ca-6faab22498bf\") " Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.636228 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-utilities" (OuterVolumeSpecName: "utilities") pod "c343f8b1-864a-4ac7-81ca-6faab22498bf" (UID: "c343f8b1-864a-4ac7-81ca-6faab22498bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.643784 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c343f8b1-864a-4ac7-81ca-6faab22498bf-kube-api-access-wvsbs" (OuterVolumeSpecName: "kube-api-access-wvsbs") pod "c343f8b1-864a-4ac7-81ca-6faab22498bf" (UID: "c343f8b1-864a-4ac7-81ca-6faab22498bf"). InnerVolumeSpecName "kube-api-access-wvsbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.685579 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c343f8b1-864a-4ac7-81ca-6faab22498bf" (UID: "c343f8b1-864a-4ac7-81ca-6faab22498bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.740429 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.740479 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c343f8b1-864a-4ac7-81ca-6faab22498bf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:14 crc kubenswrapper[4678]: I1124 12:23:14.740492 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvsbs\" (UniqueName: \"kubernetes.io/projected/c343f8b1-864a-4ac7-81ca-6faab22498bf-kube-api-access-wvsbs\") on node \"crc\" DevicePath \"\"" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.041900 4678 generic.go:334] "Generic (PLEG): container finished" podID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerID="e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163" exitCode=0 Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.041958 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx7hl" event={"ID":"c343f8b1-864a-4ac7-81ca-6faab22498bf","Type":"ContainerDied","Data":"e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163"} Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.042367 4678 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-gx7hl" event={"ID":"c343f8b1-864a-4ac7-81ca-6faab22498bf","Type":"ContainerDied","Data":"3a15f8cced9110d22205504fedc885cd5732f0695944c9e2abedd6d4c2ae74e1"} Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.042427 4678 scope.go:117] "RemoveContainer" containerID="e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.041992 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gx7hl" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.068751 4678 scope.go:117] "RemoveContainer" containerID="71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.092061 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gx7hl"] Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.106395 4678 scope.go:117] "RemoveContainer" containerID="79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.109930 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gx7hl"] Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.151130 4678 scope.go:117] "RemoveContainer" containerID="e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163" Nov 24 12:23:15 crc kubenswrapper[4678]: E1124 12:23:15.152105 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163\": container with ID starting with e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163 not found: ID does not exist" containerID="e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 
12:23:15.152164 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163"} err="failed to get container status \"e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163\": rpc error: code = NotFound desc = could not find container \"e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163\": container with ID starting with e7165d6f0f2a3d8091fa976b943cedffe5476060bc9f5f3c8c6a3f5550928163 not found: ID does not exist" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.152211 4678 scope.go:117] "RemoveContainer" containerID="71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba" Nov 24 12:23:15 crc kubenswrapper[4678]: E1124 12:23:15.153043 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba\": container with ID starting with 71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba not found: ID does not exist" containerID="71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.153243 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba"} err="failed to get container status \"71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba\": rpc error: code = NotFound desc = could not find container \"71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba\": container with ID starting with 71ffb41dd6b0aa5af8a47b0f14bcbb9d3e5e868da642cdd200d23c9cc18ba7ba not found: ID does not exist" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.153399 4678 scope.go:117] "RemoveContainer" containerID="79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a" Nov 24 12:23:15 crc 
kubenswrapper[4678]: E1124 12:23:15.154379 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a\": container with ID starting with 79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a not found: ID does not exist" containerID="79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.154846 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a"} err="failed to get container status \"79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a\": rpc error: code = NotFound desc = could not find container \"79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a\": container with ID starting with 79484bb061dcbd4934d4b37203415511052d409ba4ea8478ef5c3b5413613d7a not found: ID does not exist" Nov 24 12:23:15 crc kubenswrapper[4678]: I1124 12:23:15.922850 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" path="/var/lib/kubelet/pods/c343f8b1-864a-4ac7-81ca-6faab22498bf/volumes" Nov 24 12:23:30 crc kubenswrapper[4678]: I1124 12:23:30.297398 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:23:30 crc kubenswrapper[4678]: I1124 12:23:30.298066 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 24 12:24:00 crc kubenswrapper[4678]: I1124 12:24:00.297125 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:24:00 crc kubenswrapper[4678]: I1124 12:24:00.297614 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:24:00 crc kubenswrapper[4678]: I1124 12:24:00.297660 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:24:00 crc kubenswrapper[4678]: I1124 12:24:00.298594 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16489c56369f7c537f307c201030a2a1cdbb657958c81b32bb3a0e8ddbf7ba5b"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:24:00 crc kubenswrapper[4678]: I1124 12:24:00.298657 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://16489c56369f7c537f307c201030a2a1cdbb657958c81b32bb3a0e8ddbf7ba5b" gracePeriod=600 Nov 24 12:24:00 crc kubenswrapper[4678]: I1124 12:24:00.621121 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" 
containerID="16489c56369f7c537f307c201030a2a1cdbb657958c81b32bb3a0e8ddbf7ba5b" exitCode=0 Nov 24 12:24:00 crc kubenswrapper[4678]: I1124 12:24:00.621288 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"16489c56369f7c537f307c201030a2a1cdbb657958c81b32bb3a0e8ddbf7ba5b"} Nov 24 12:24:00 crc kubenswrapper[4678]: I1124 12:24:00.621474 4678 scope.go:117] "RemoveContainer" containerID="2d08937db89ce7e8ed73181d92770aa90eb7c481cb1f551212945b32468d3614" Nov 24 12:24:01 crc kubenswrapper[4678]: I1124 12:24:01.638690 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf"} Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.135158 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-59rlw"] Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136520 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148de679-89c3-4784-85f7-756b915a91e6" containerName="extract-content" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136540 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="148de679-89c3-4784-85f7-756b915a91e6" containerName="extract-content" Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136593 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148de679-89c3-4784-85f7-756b915a91e6" containerName="extract-utilities" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136603 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="148de679-89c3-4784-85f7-756b915a91e6" containerName="extract-utilities" Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136624 4678 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136632 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136645 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="extract-content" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136653 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="extract-content" Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136700 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148de679-89c3-4784-85f7-756b915a91e6" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136709 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="148de679-89c3-4784-85f7-756b915a91e6" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136724 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="extract-utilities" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136731 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="extract-utilities" Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136743 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="extract-content" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136750 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="extract-content" Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136767 4678 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136800 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: E1124 12:25:04.136815 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="extract-utilities" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.136823 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="extract-utilities" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.137127 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="148de679-89c3-4784-85f7-756b915a91e6" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.137177 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c343f8b1-864a-4ac7-81ca-6faab22498bf" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.137193 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa96cd29-5c94-4247-bac0-9ae04bdb3c72" containerName="registry-server" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.139623 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.189641 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-59rlw"] Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.215059 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-utilities\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.215354 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-catalog-content\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.215456 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlmtc\" (UniqueName: \"kubernetes.io/projected/0926af3d-fa6c-4b6e-b6ba-74912b6da441-kube-api-access-zlmtc\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.317275 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-catalog-content\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.317388 4678 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zlmtc\" (UniqueName: \"kubernetes.io/projected/0926af3d-fa6c-4b6e-b6ba-74912b6da441-kube-api-access-zlmtc\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.317647 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-utilities\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.317791 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-catalog-content\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.318145 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-utilities\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.339528 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlmtc\" (UniqueName: \"kubernetes.io/projected/0926af3d-fa6c-4b6e-b6ba-74912b6da441-kube-api-access-zlmtc\") pod \"redhat-operators-59rlw\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:04 crc kubenswrapper[4678]: I1124 12:25:04.508200 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:05 crc kubenswrapper[4678]: I1124 12:25:05.037290 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-59rlw"] Nov 24 12:25:05 crc kubenswrapper[4678]: I1124 12:25:05.459097 4678 generic.go:334] "Generic (PLEG): container finished" podID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerID="8ffca57c3edb8b0d67c6354a69a3799ea293fef7ee82b2413c095c7bbbd12a23" exitCode=0 Nov 24 12:25:05 crc kubenswrapper[4678]: I1124 12:25:05.459212 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59rlw" event={"ID":"0926af3d-fa6c-4b6e-b6ba-74912b6da441","Type":"ContainerDied","Data":"8ffca57c3edb8b0d67c6354a69a3799ea293fef7ee82b2413c095c7bbbd12a23"} Nov 24 12:25:05 crc kubenswrapper[4678]: I1124 12:25:05.459699 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59rlw" event={"ID":"0926af3d-fa6c-4b6e-b6ba-74912b6da441","Type":"ContainerStarted","Data":"c90f2e1af700f6fc8460cd3e046551eddfb1d12431b499140ad469c042d90c75"} Nov 24 12:25:06 crc kubenswrapper[4678]: I1124 12:25:06.475278 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59rlw" event={"ID":"0926af3d-fa6c-4b6e-b6ba-74912b6da441","Type":"ContainerStarted","Data":"8c1f3a3b733c391670c1664b1719da0b415ec5717d1d38982eb902d580056e5c"} Nov 24 12:25:12 crc kubenswrapper[4678]: I1124 12:25:12.575431 4678 generic.go:334] "Generic (PLEG): container finished" podID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerID="8c1f3a3b733c391670c1664b1719da0b415ec5717d1d38982eb902d580056e5c" exitCode=0 Nov 24 12:25:12 crc kubenswrapper[4678]: I1124 12:25:12.575512 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59rlw" 
event={"ID":"0926af3d-fa6c-4b6e-b6ba-74912b6da441","Type":"ContainerDied","Data":"8c1f3a3b733c391670c1664b1719da0b415ec5717d1d38982eb902d580056e5c"} Nov 24 12:25:13 crc kubenswrapper[4678]: I1124 12:25:13.592619 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59rlw" event={"ID":"0926af3d-fa6c-4b6e-b6ba-74912b6da441","Type":"ContainerStarted","Data":"542e12b9d8a9d044317a920c567ce444858fbebc130b8f8c6aeeddd473c57ca8"} Nov 24 12:25:13 crc kubenswrapper[4678]: I1124 12:25:13.625200 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-59rlw" podStartSLOduration=2.007646741 podStartE2EDuration="9.625174772s" podCreationTimestamp="2025-11-24 12:25:04 +0000 UTC" firstStartedPulling="2025-11-24 12:25:05.461244602 +0000 UTC m=+4116.392304241" lastFinishedPulling="2025-11-24 12:25:13.078772633 +0000 UTC m=+4124.009832272" observedRunningTime="2025-11-24 12:25:13.616327395 +0000 UTC m=+4124.547387054" watchObservedRunningTime="2025-11-24 12:25:13.625174772 +0000 UTC m=+4124.556234411" Nov 24 12:25:14 crc kubenswrapper[4678]: I1124 12:25:14.508756 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:14 crc kubenswrapper[4678]: I1124 12:25:14.509375 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:15 crc kubenswrapper[4678]: I1124 12:25:15.564149 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-59rlw" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="registry-server" probeResult="failure" output=< Nov 24 12:25:15 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:25:15 crc kubenswrapper[4678]: > Nov 24 12:25:25 crc kubenswrapper[4678]: I1124 12:25:25.565528 4678 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-59rlw" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="registry-server" probeResult="failure" output=< Nov 24 12:25:25 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:25:25 crc kubenswrapper[4678]: > Nov 24 12:25:34 crc kubenswrapper[4678]: I1124 12:25:34.849115 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:34 crc kubenswrapper[4678]: I1124 12:25:34.921657 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:35 crc kubenswrapper[4678]: I1124 12:25:35.327568 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-59rlw"] Nov 24 12:25:36 crc kubenswrapper[4678]: I1124 12:25:36.884714 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-59rlw" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="registry-server" containerID="cri-o://542e12b9d8a9d044317a920c567ce444858fbebc130b8f8c6aeeddd473c57ca8" gracePeriod=2 Nov 24 12:25:37 crc kubenswrapper[4678]: I1124 12:25:37.900597 4678 generic.go:334] "Generic (PLEG): container finished" podID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerID="542e12b9d8a9d044317a920c567ce444858fbebc130b8f8c6aeeddd473c57ca8" exitCode=0 Nov 24 12:25:37 crc kubenswrapper[4678]: I1124 12:25:37.910937 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59rlw" event={"ID":"0926af3d-fa6c-4b6e-b6ba-74912b6da441","Type":"ContainerDied","Data":"542e12b9d8a9d044317a920c567ce444858fbebc130b8f8c6aeeddd473c57ca8"} Nov 24 12:25:38 crc kubenswrapper[4678]: E1124 12:25:38.424052 4678 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:53132->38.102.83.214:39261: write 
tcp 38.102.83.214:53132->38.102.83.214:39261: write: broken pipe Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.611052 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.799955 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlmtc\" (UniqueName: \"kubernetes.io/projected/0926af3d-fa6c-4b6e-b6ba-74912b6da441-kube-api-access-zlmtc\") pod \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.800078 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-catalog-content\") pod \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.800522 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-utilities\") pod \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\" (UID: \"0926af3d-fa6c-4b6e-b6ba-74912b6da441\") " Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.802720 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-utilities" (OuterVolumeSpecName: "utilities") pod "0926af3d-fa6c-4b6e-b6ba-74912b6da441" (UID: "0926af3d-fa6c-4b6e-b6ba-74912b6da441"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.810653 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0926af3d-fa6c-4b6e-b6ba-74912b6da441-kube-api-access-zlmtc" (OuterVolumeSpecName: "kube-api-access-zlmtc") pod "0926af3d-fa6c-4b6e-b6ba-74912b6da441" (UID: "0926af3d-fa6c-4b6e-b6ba-74912b6da441"). InnerVolumeSpecName "kube-api-access-zlmtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.911168 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.911211 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlmtc\" (UniqueName: \"kubernetes.io/projected/0926af3d-fa6c-4b6e-b6ba-74912b6da441-kube-api-access-zlmtc\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.918085 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-59rlw" event={"ID":"0926af3d-fa6c-4b6e-b6ba-74912b6da441","Type":"ContainerDied","Data":"c90f2e1af700f6fc8460cd3e046551eddfb1d12431b499140ad469c042d90c75"} Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.918474 4678 scope.go:117] "RemoveContainer" containerID="542e12b9d8a9d044317a920c567ce444858fbebc130b8f8c6aeeddd473c57ca8" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.918352 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-59rlw" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.948772 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0926af3d-fa6c-4b6e-b6ba-74912b6da441" (UID: "0926af3d-fa6c-4b6e-b6ba-74912b6da441"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.953361 4678 scope.go:117] "RemoveContainer" containerID="8c1f3a3b733c391670c1664b1719da0b415ec5717d1d38982eb902d580056e5c" Nov 24 12:25:38 crc kubenswrapper[4678]: I1124 12:25:38.981016 4678 scope.go:117] "RemoveContainer" containerID="8ffca57c3edb8b0d67c6354a69a3799ea293fef7ee82b2413c095c7bbbd12a23" Nov 24 12:25:39 crc kubenswrapper[4678]: I1124 12:25:39.015329 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0926af3d-fa6c-4b6e-b6ba-74912b6da441-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:25:39 crc kubenswrapper[4678]: I1124 12:25:39.269096 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-59rlw"] Nov 24 12:25:39 crc kubenswrapper[4678]: I1124 12:25:39.296840 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-59rlw"] Nov 24 12:25:39 crc kubenswrapper[4678]: I1124 12:25:39.911542 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" path="/var/lib/kubelet/pods/0926af3d-fa6c-4b6e-b6ba-74912b6da441/volumes" Nov 24 12:26:00 crc kubenswrapper[4678]: I1124 12:26:00.297263 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:26:00 crc kubenswrapper[4678]: I1124 12:26:00.298074 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:26:30 crc kubenswrapper[4678]: I1124 12:26:30.296924 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:26:30 crc kubenswrapper[4678]: I1124 12:26:30.297959 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.298518 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.299530 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 
12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.299608 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.301166 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.301243 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" gracePeriod=600 Nov 24 12:27:00 crc kubenswrapper[4678]: E1124 12:27:00.426487 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.953518 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" exitCode=0 Nov 24 12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.953608 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf"} Nov 24 12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.954147 4678 scope.go:117] "RemoveContainer" containerID="16489c56369f7c537f307c201030a2a1cdbb657958c81b32bb3a0e8ddbf7ba5b" Nov 24 12:27:00 crc kubenswrapper[4678]: I1124 12:27:00.955165 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:27:00 crc kubenswrapper[4678]: E1124 12:27:00.955588 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:27:15 crc kubenswrapper[4678]: I1124 12:27:15.897057 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:27:15 crc kubenswrapper[4678]: E1124 12:27:15.898102 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:27:29 crc kubenswrapper[4678]: I1124 12:27:29.906041 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:27:29 crc kubenswrapper[4678]: E1124 12:27:29.906918 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:27:42 crc kubenswrapper[4678]: I1124 12:27:42.896413 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:27:42 crc kubenswrapper[4678]: E1124 12:27:42.897214 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:27:56 crc kubenswrapper[4678]: I1124 12:27:56.897797 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:27:56 crc kubenswrapper[4678]: E1124 12:27:56.898726 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:28:11 crc kubenswrapper[4678]: I1124 12:28:11.896243 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:28:11 crc kubenswrapper[4678]: E1124 12:28:11.897617 4678 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:28:23 crc kubenswrapper[4678]: I1124 12:28:23.897185 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:28:23 crc kubenswrapper[4678]: E1124 12:28:23.898595 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:28:36 crc kubenswrapper[4678]: I1124 12:28:36.896622 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:28:36 crc kubenswrapper[4678]: E1124 12:28:36.897883 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:28:50 crc kubenswrapper[4678]: I1124 12:28:50.897109 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:28:50 crc kubenswrapper[4678]: E1124 12:28:50.899061 4678 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:29:04 crc kubenswrapper[4678]: I1124 12:29:04.896765 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:29:04 crc kubenswrapper[4678]: E1124 12:29:04.897899 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:29:15 crc kubenswrapper[4678]: I1124 12:29:15.897180 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:29:15 crc kubenswrapper[4678]: E1124 12:29:15.898821 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:29:26 crc kubenswrapper[4678]: I1124 12:29:26.895318 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:29:26 crc kubenswrapper[4678]: E1124 12:29:26.896099 4678 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:29:41 crc kubenswrapper[4678]: I1124 12:29:41.896170 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:29:41 crc kubenswrapper[4678]: E1124 12:29:41.897097 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:29:56 crc kubenswrapper[4678]: I1124 12:29:56.896317 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:29:56 crc kubenswrapper[4678]: E1124 12:29:56.897170 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.153168 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x"] Nov 24 12:30:00 crc 
kubenswrapper[4678]: E1124 12:30:00.154188 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.154205 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[4678]: E1124 12:30:00.154222 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="extract-content" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.154229 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="extract-content" Nov 24 12:30:00 crc kubenswrapper[4678]: E1124 12:30:00.154249 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="extract-utilities" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.154255 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="extract-utilities" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.154478 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="0926af3d-fa6c-4b6e-b6ba-74912b6da441" containerName="registry-server" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.155293 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.159436 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.160774 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.180566 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x"] Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.269464 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-config-volume\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.270012 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-secret-volume\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.270066 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm48f\" (UniqueName: \"kubernetes.io/projected/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-kube-api-access-qm48f\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.374659 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-config-volume\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.374842 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-secret-volume\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.374888 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm48f\" (UniqueName: \"kubernetes.io/projected/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-kube-api-access-qm48f\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.376069 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-config-volume\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.388183 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-secret-volume\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.396330 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm48f\" (UniqueName: \"kubernetes.io/projected/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-kube-api-access-qm48f\") pod \"collect-profiles-29399790-8gc9x\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:00 crc kubenswrapper[4678]: I1124 12:30:00.480797 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:01 crc kubenswrapper[4678]: I1124 12:30:01.011124 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x"] Nov 24 12:30:01 crc kubenswrapper[4678]: I1124 12:30:01.247840 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" event={"ID":"5444d6db-2b72-4ef3-8dc5-da0f2540e49d","Type":"ContainerStarted","Data":"a38580ae46064596082351abaa10e2a8d6e2b8fb6b8481cba785c75f39814744"} Nov 24 12:30:01 crc kubenswrapper[4678]: I1124 12:30:01.248362 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" event={"ID":"5444d6db-2b72-4ef3-8dc5-da0f2540e49d","Type":"ContainerStarted","Data":"85a170db6880678dafca855a84c7225e6027d7ba38f260a411cc18e3fe95f1f2"} Nov 24 12:30:01 crc kubenswrapper[4678]: I1124 12:30:01.277887 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" 
podStartSLOduration=1.277863078 podStartE2EDuration="1.277863078s" podCreationTimestamp="2025-11-24 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:30:01.264204711 +0000 UTC m=+4412.195264360" watchObservedRunningTime="2025-11-24 12:30:01.277863078 +0000 UTC m=+4412.208922717" Nov 24 12:30:02 crc kubenswrapper[4678]: I1124 12:30:02.262144 4678 generic.go:334] "Generic (PLEG): container finished" podID="5444d6db-2b72-4ef3-8dc5-da0f2540e49d" containerID="a38580ae46064596082351abaa10e2a8d6e2b8fb6b8481cba785c75f39814744" exitCode=0 Nov 24 12:30:02 crc kubenswrapper[4678]: I1124 12:30:02.262236 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" event={"ID":"5444d6db-2b72-4ef3-8dc5-da0f2540e49d","Type":"ContainerDied","Data":"a38580ae46064596082351abaa10e2a8d6e2b8fb6b8481cba785c75f39814744"} Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.804540 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.870562 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm48f\" (UniqueName: \"kubernetes.io/projected/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-kube-api-access-qm48f\") pod \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.870716 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-secret-volume\") pod \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.870836 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-config-volume\") pod \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\" (UID: \"5444d6db-2b72-4ef3-8dc5-da0f2540e49d\") " Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.872383 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-config-volume" (OuterVolumeSpecName: "config-volume") pod "5444d6db-2b72-4ef3-8dc5-da0f2540e49d" (UID: "5444d6db-2b72-4ef3-8dc5-da0f2540e49d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.880451 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-kube-api-access-qm48f" (OuterVolumeSpecName: "kube-api-access-qm48f") pod "5444d6db-2b72-4ef3-8dc5-da0f2540e49d" (UID: "5444d6db-2b72-4ef3-8dc5-da0f2540e49d"). 
InnerVolumeSpecName "kube-api-access-qm48f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.880581 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5444d6db-2b72-4ef3-8dc5-da0f2540e49d" (UID: "5444d6db-2b72-4ef3-8dc5-da0f2540e49d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.973732 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm48f\" (UniqueName: \"kubernetes.io/projected/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-kube-api-access-qm48f\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.973930 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:03 crc kubenswrapper[4678]: I1124 12:30:03.974541 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5444d6db-2b72-4ef3-8dc5-da0f2540e49d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:04 crc kubenswrapper[4678]: I1124 12:30:04.290892 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" event={"ID":"5444d6db-2b72-4ef3-8dc5-da0f2540e49d","Type":"ContainerDied","Data":"85a170db6880678dafca855a84c7225e6027d7ba38f260a411cc18e3fe95f1f2"} Nov 24 12:30:04 crc kubenswrapper[4678]: I1124 12:30:04.291215 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85a170db6880678dafca855a84c7225e6027d7ba38f260a411cc18e3fe95f1f2" Nov 24 12:30:04 crc kubenswrapper[4678]: I1124 12:30:04.290984 4678 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x" Nov 24 12:30:04 crc kubenswrapper[4678]: I1124 12:30:04.349796 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx"] Nov 24 12:30:04 crc kubenswrapper[4678]: I1124 12:30:04.381192 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399745-8sqgx"] Nov 24 12:30:05 crc kubenswrapper[4678]: I1124 12:30:05.920726 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67504c69-9aa8-4c55-8e64-fbb6291254e5" path="/var/lib/kubelet/pods/67504c69-9aa8-4c55-8e64-fbb6291254e5/volumes" Nov 24 12:30:09 crc kubenswrapper[4678]: I1124 12:30:09.905095 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:30:09 crc kubenswrapper[4678]: E1124 12:30:09.906078 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:30:23 crc kubenswrapper[4678]: I1124 12:30:23.896836 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:30:23 crc kubenswrapper[4678]: E1124 12:30:23.898846 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:30:38 crc kubenswrapper[4678]: I1124 12:30:38.869441 4678 scope.go:117] "RemoveContainer" containerID="1ded26ce454bc6632d25fe34ccf49bcf3287ac60447eba91aa7c5f521fc616e8" Nov 24 12:30:38 crc kubenswrapper[4678]: I1124 12:30:38.896362 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:30:38 crc kubenswrapper[4678]: E1124 12:30:38.896601 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:30:49 crc kubenswrapper[4678]: I1124 12:30:49.915830 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:30:49 crc kubenswrapper[4678]: E1124 12:30:49.916772 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:31:03 crc kubenswrapper[4678]: I1124 12:31:03.897451 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:31:03 crc kubenswrapper[4678]: E1124 12:31:03.898469 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:31:17 crc kubenswrapper[4678]: I1124 12:31:17.897222 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:31:17 crc kubenswrapper[4678]: E1124 12:31:17.898339 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:31:31 crc kubenswrapper[4678]: I1124 12:31:31.896503 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:31:31 crc kubenswrapper[4678]: E1124 12:31:31.898017 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:31:44 crc kubenswrapper[4678]: I1124 12:31:44.896221 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:31:44 crc kubenswrapper[4678]: E1124 12:31:44.897892 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:31:58 crc kubenswrapper[4678]: I1124 12:31:58.897418 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:31:58 crc kubenswrapper[4678]: E1124 12:31:58.898560 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:32:10 crc kubenswrapper[4678]: I1124 12:32:10.896494 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:32:11 crc kubenswrapper[4678]: I1124 12:32:11.791130 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"1a31ffc07ae0861b29ae078376eb8ee33d353d6ff9acd31e0ceddd331c47e09e"} Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.737101 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-84swn"] Nov 24 12:33:12 crc kubenswrapper[4678]: E1124 12:33:12.738798 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5444d6db-2b72-4ef3-8dc5-da0f2540e49d" containerName="collect-profiles" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.738818 4678 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="5444d6db-2b72-4ef3-8dc5-da0f2540e49d" containerName="collect-profiles" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.739108 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5444d6db-2b72-4ef3-8dc5-da0f2540e49d" containerName="collect-profiles" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.741628 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.750374 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-84swn"] Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.820275 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-catalog-content\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.820425 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-utilities\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.820912 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vghf\" (UniqueName: \"kubernetes.io/projected/c207729b-b427-403a-a017-94680b44f9c6-kube-api-access-6vghf\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.924155 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-utilities\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.924751 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vghf\" (UniqueName: \"kubernetes.io/projected/c207729b-b427-403a-a017-94680b44f9c6-kube-api-access-6vghf\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.924973 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-catalog-content\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.925654 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-catalog-content\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.926285 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-utilities\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:12 crc kubenswrapper[4678]: I1124 12:33:12.948557 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6vghf\" (UniqueName: \"kubernetes.io/projected/c207729b-b427-403a-a017-94680b44f9c6-kube-api-access-6vghf\") pod \"redhat-marketplace-84swn\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:13 crc kubenswrapper[4678]: I1124 12:33:13.080566 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:13 crc kubenswrapper[4678]: I1124 12:33:13.626167 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-84swn"] Nov 24 12:33:14 crc kubenswrapper[4678]: I1124 12:33:14.618719 4678 generic.go:334] "Generic (PLEG): container finished" podID="c207729b-b427-403a-a017-94680b44f9c6" containerID="ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76" exitCode=0 Nov 24 12:33:14 crc kubenswrapper[4678]: I1124 12:33:14.618834 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84swn" event={"ID":"c207729b-b427-403a-a017-94680b44f9c6","Type":"ContainerDied","Data":"ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76"} Nov 24 12:33:14 crc kubenswrapper[4678]: I1124 12:33:14.619318 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84swn" event={"ID":"c207729b-b427-403a-a017-94680b44f9c6","Type":"ContainerStarted","Data":"05d3f3fe7f9d7fbeaa30990100218f906a5a00bc5c66d75f2a4ba60d5aaa4fbe"} Nov 24 12:33:14 crc kubenswrapper[4678]: I1124 12:33:14.621518 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:33:15 crc kubenswrapper[4678]: I1124 12:33:15.637897 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84swn" 
event={"ID":"c207729b-b427-403a-a017-94680b44f9c6","Type":"ContainerStarted","Data":"3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b"} Nov 24 12:33:16 crc kubenswrapper[4678]: I1124 12:33:16.654228 4678 generic.go:334] "Generic (PLEG): container finished" podID="c207729b-b427-403a-a017-94680b44f9c6" containerID="3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b" exitCode=0 Nov 24 12:33:16 crc kubenswrapper[4678]: I1124 12:33:16.654364 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84swn" event={"ID":"c207729b-b427-403a-a017-94680b44f9c6","Type":"ContainerDied","Data":"3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b"} Nov 24 12:33:17 crc kubenswrapper[4678]: I1124 12:33:17.669397 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84swn" event={"ID":"c207729b-b427-403a-a017-94680b44f9c6","Type":"ContainerStarted","Data":"1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308"} Nov 24 12:33:17 crc kubenswrapper[4678]: I1124 12:33:17.705162 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-84swn" podStartSLOduration=3.192118081 podStartE2EDuration="5.705133765s" podCreationTimestamp="2025-11-24 12:33:12 +0000 UTC" firstStartedPulling="2025-11-24 12:33:14.621252548 +0000 UTC m=+4605.552312197" lastFinishedPulling="2025-11-24 12:33:17.134268242 +0000 UTC m=+4608.065327881" observedRunningTime="2025-11-24 12:33:17.690569624 +0000 UTC m=+4608.621629273" watchObservedRunningTime="2025-11-24 12:33:17.705133765 +0000 UTC m=+4608.636193404" Nov 24 12:33:23 crc kubenswrapper[4678]: I1124 12:33:23.081423 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:23 crc kubenswrapper[4678]: I1124 12:33:23.082303 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:23 crc kubenswrapper[4678]: I1124 12:33:23.139001 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:23 crc kubenswrapper[4678]: I1124 12:33:23.794938 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:23 crc kubenswrapper[4678]: I1124 12:33:23.886045 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-84swn"] Nov 24 12:33:25 crc kubenswrapper[4678]: I1124 12:33:25.765732 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-84swn" podUID="c207729b-b427-403a-a017-94680b44f9c6" containerName="registry-server" containerID="cri-o://1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308" gracePeriod=2 Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.349684 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.445035 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-catalog-content\") pod \"c207729b-b427-403a-a017-94680b44f9c6\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.445286 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-utilities\") pod \"c207729b-b427-403a-a017-94680b44f9c6\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.445358 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vghf\" (UniqueName: \"kubernetes.io/projected/c207729b-b427-403a-a017-94680b44f9c6-kube-api-access-6vghf\") pod \"c207729b-b427-403a-a017-94680b44f9c6\" (UID: \"c207729b-b427-403a-a017-94680b44f9c6\") " Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.446437 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-utilities" (OuterVolumeSpecName: "utilities") pod "c207729b-b427-403a-a017-94680b44f9c6" (UID: "c207729b-b427-403a-a017-94680b44f9c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.451894 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c207729b-b427-403a-a017-94680b44f9c6-kube-api-access-6vghf" (OuterVolumeSpecName: "kube-api-access-6vghf") pod "c207729b-b427-403a-a017-94680b44f9c6" (UID: "c207729b-b427-403a-a017-94680b44f9c6"). InnerVolumeSpecName "kube-api-access-6vghf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.460729 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c207729b-b427-403a-a017-94680b44f9c6" (UID: "c207729b-b427-403a-a017-94680b44f9c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.548105 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.548177 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vghf\" (UniqueName: \"kubernetes.io/projected/c207729b-b427-403a-a017-94680b44f9c6-kube-api-access-6vghf\") on node \"crc\" DevicePath \"\"" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.548189 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c207729b-b427-403a-a017-94680b44f9c6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.779198 4678 generic.go:334] "Generic (PLEG): container finished" podID="c207729b-b427-403a-a017-94680b44f9c6" containerID="1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308" exitCode=0 Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.779308 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84swn" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.779309 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84swn" event={"ID":"c207729b-b427-403a-a017-94680b44f9c6","Type":"ContainerDied","Data":"1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308"} Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.779736 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84swn" event={"ID":"c207729b-b427-403a-a017-94680b44f9c6","Type":"ContainerDied","Data":"05d3f3fe7f9d7fbeaa30990100218f906a5a00bc5c66d75f2a4ba60d5aaa4fbe"} Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.779762 4678 scope.go:117] "RemoveContainer" containerID="1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.804921 4678 scope.go:117] "RemoveContainer" containerID="3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.823452 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-84swn"] Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.847151 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-84swn"] Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.847844 4678 scope.go:117] "RemoveContainer" containerID="ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.893401 4678 scope.go:117] "RemoveContainer" containerID="1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308" Nov 24 12:33:26 crc kubenswrapper[4678]: E1124 12:33:26.897417 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308\": container with ID starting with 1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308 not found: ID does not exist" containerID="1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.897473 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308"} err="failed to get container status \"1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308\": rpc error: code = NotFound desc = could not find container \"1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308\": container with ID starting with 1bc568374828f3678142cbd8883ecc6be614c0d8eca97846a216635e1f595308 not found: ID does not exist" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.897505 4678 scope.go:117] "RemoveContainer" containerID="3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b" Nov 24 12:33:26 crc kubenswrapper[4678]: E1124 12:33:26.897882 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b\": container with ID starting with 3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b not found: ID does not exist" containerID="3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.897932 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b"} err="failed to get container status \"3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b\": rpc error: code = NotFound desc = could not find container \"3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b\": container with ID 
starting with 3ea57ba7ce7429302dde0149b9b9d17516ebd8096fd59dc0aa4fc74863c92b3b not found: ID does not exist" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.897964 4678 scope.go:117] "RemoveContainer" containerID="ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76" Nov 24 12:33:26 crc kubenswrapper[4678]: E1124 12:33:26.898821 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76\": container with ID starting with ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76 not found: ID does not exist" containerID="ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76" Nov 24 12:33:26 crc kubenswrapper[4678]: I1124 12:33:26.898884 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76"} err="failed to get container status \"ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76\": rpc error: code = NotFound desc = could not find container \"ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76\": container with ID starting with ad3ebdf5ade5d04c78f02fa3984c27d35a0fbfc44bb102563db3e579d706ae76 not found: ID does not exist" Nov 24 12:33:27 crc kubenswrapper[4678]: I1124 12:33:27.910701 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c207729b-b427-403a-a017-94680b44f9c6" path="/var/lib/kubelet/pods/c207729b-b427-403a-a017-94680b44f9c6/volumes" Nov 24 12:33:43 crc kubenswrapper[4678]: I1124 12:33:43.963941 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c8sx2"] Nov 24 12:33:43 crc kubenswrapper[4678]: E1124 12:33:43.965165 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c207729b-b427-403a-a017-94680b44f9c6" containerName="extract-utilities" Nov 24 12:33:43 crc 
kubenswrapper[4678]: I1124 12:33:43.965186 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c207729b-b427-403a-a017-94680b44f9c6" containerName="extract-utilities" Nov 24 12:33:43 crc kubenswrapper[4678]: E1124 12:33:43.965229 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c207729b-b427-403a-a017-94680b44f9c6" containerName="registry-server" Nov 24 12:33:43 crc kubenswrapper[4678]: I1124 12:33:43.965235 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c207729b-b427-403a-a017-94680b44f9c6" containerName="registry-server" Nov 24 12:33:43 crc kubenswrapper[4678]: E1124 12:33:43.965263 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c207729b-b427-403a-a017-94680b44f9c6" containerName="extract-content" Nov 24 12:33:43 crc kubenswrapper[4678]: I1124 12:33:43.965269 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c207729b-b427-403a-a017-94680b44f9c6" containerName="extract-content" Nov 24 12:33:43 crc kubenswrapper[4678]: I1124 12:33:43.965483 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c207729b-b427-403a-a017-94680b44f9c6" containerName="registry-server" Nov 24 12:33:43 crc kubenswrapper[4678]: I1124 12:33:43.967413 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:43 crc kubenswrapper[4678]: I1124 12:33:43.981215 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8sx2"] Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.014859 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jgz7\" (UniqueName: \"kubernetes.io/projected/430a2854-ac30-4b6f-8bbf-46085fb2d694-kube-api-access-7jgz7\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.015091 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-catalog-content\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.015266 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-utilities\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.117618 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jgz7\" (UniqueName: \"kubernetes.io/projected/430a2854-ac30-4b6f-8bbf-46085fb2d694-kube-api-access-7jgz7\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.117749 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-catalog-content\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.117879 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-utilities\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.118305 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-catalog-content\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.118343 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-utilities\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.145703 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jgz7\" (UniqueName: \"kubernetes.io/projected/430a2854-ac30-4b6f-8bbf-46085fb2d694-kube-api-access-7jgz7\") pod \"community-operators-c8sx2\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.295485 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:44 crc kubenswrapper[4678]: I1124 12:33:44.893066 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8sx2"] Nov 24 12:33:45 crc kubenswrapper[4678]: I1124 12:33:45.012974 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8sx2" event={"ID":"430a2854-ac30-4b6f-8bbf-46085fb2d694","Type":"ContainerStarted","Data":"45633f0fa93b5f68367bced5dc9ae9cd854dbcfa8619957b6312cc8f3a4d6cb3"} Nov 24 12:33:46 crc kubenswrapper[4678]: I1124 12:33:46.024790 4678 generic.go:334] "Generic (PLEG): container finished" podID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerID="286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb" exitCode=0 Nov 24 12:33:46 crc kubenswrapper[4678]: I1124 12:33:46.025106 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8sx2" event={"ID":"430a2854-ac30-4b6f-8bbf-46085fb2d694","Type":"ContainerDied","Data":"286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb"} Nov 24 12:33:48 crc kubenswrapper[4678]: I1124 12:33:48.056878 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8sx2" event={"ID":"430a2854-ac30-4b6f-8bbf-46085fb2d694","Type":"ContainerStarted","Data":"88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248"} Nov 24 12:33:51 crc kubenswrapper[4678]: I1124 12:33:51.091757 4678 generic.go:334] "Generic (PLEG): container finished" podID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerID="88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248" exitCode=0 Nov 24 12:33:51 crc kubenswrapper[4678]: I1124 12:33:51.091842 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8sx2" 
event={"ID":"430a2854-ac30-4b6f-8bbf-46085fb2d694","Type":"ContainerDied","Data":"88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248"} Nov 24 12:33:53 crc kubenswrapper[4678]: I1124 12:33:53.120502 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8sx2" event={"ID":"430a2854-ac30-4b6f-8bbf-46085fb2d694","Type":"ContainerStarted","Data":"e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2"} Nov 24 12:33:53 crc kubenswrapper[4678]: I1124 12:33:53.151522 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c8sx2" podStartSLOduration=4.65280671 podStartE2EDuration="10.151499268s" podCreationTimestamp="2025-11-24 12:33:43 +0000 UTC" firstStartedPulling="2025-11-24 12:33:46.027147633 +0000 UTC m=+4636.958207272" lastFinishedPulling="2025-11-24 12:33:51.525840191 +0000 UTC m=+4642.456899830" observedRunningTime="2025-11-24 12:33:53.142821576 +0000 UTC m=+4644.073881235" watchObservedRunningTime="2025-11-24 12:33:53.151499268 +0000 UTC m=+4644.082558907" Nov 24 12:33:54 crc kubenswrapper[4678]: I1124 12:33:54.296440 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:54 crc kubenswrapper[4678]: I1124 12:33:54.296892 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:33:55 crc kubenswrapper[4678]: I1124 12:33:55.350761 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c8sx2" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="registry-server" probeResult="failure" output=< Nov 24 12:33:55 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:33:55 crc kubenswrapper[4678]: > Nov 24 12:34:05 crc kubenswrapper[4678]: I1124 12:34:05.820866 4678 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/community-operators-c8sx2" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="registry-server" probeResult="failure" output=< Nov 24 12:34:05 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:34:05 crc kubenswrapper[4678]: > Nov 24 12:34:14 crc kubenswrapper[4678]: I1124 12:34:14.346951 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:34:14 crc kubenswrapper[4678]: I1124 12:34:14.399522 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:34:15 crc kubenswrapper[4678]: I1124 12:34:15.165387 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8sx2"] Nov 24 12:34:15 crc kubenswrapper[4678]: I1124 12:34:15.395510 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c8sx2" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="registry-server" containerID="cri-o://e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2" gracePeriod=2 Nov 24 12:34:15 crc kubenswrapper[4678]: I1124 12:34:15.908995 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.015906 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jgz7\" (UniqueName: \"kubernetes.io/projected/430a2854-ac30-4b6f-8bbf-46085fb2d694-kube-api-access-7jgz7\") pod \"430a2854-ac30-4b6f-8bbf-46085fb2d694\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.016139 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-utilities\") pod \"430a2854-ac30-4b6f-8bbf-46085fb2d694\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.016563 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-catalog-content\") pod \"430a2854-ac30-4b6f-8bbf-46085fb2d694\" (UID: \"430a2854-ac30-4b6f-8bbf-46085fb2d694\") " Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.017560 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-utilities" (OuterVolumeSpecName: "utilities") pod "430a2854-ac30-4b6f-8bbf-46085fb2d694" (UID: "430a2854-ac30-4b6f-8bbf-46085fb2d694"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.018192 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.021849 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430a2854-ac30-4b6f-8bbf-46085fb2d694-kube-api-access-7jgz7" (OuterVolumeSpecName: "kube-api-access-7jgz7") pod "430a2854-ac30-4b6f-8bbf-46085fb2d694" (UID: "430a2854-ac30-4b6f-8bbf-46085fb2d694"). InnerVolumeSpecName "kube-api-access-7jgz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.077543 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "430a2854-ac30-4b6f-8bbf-46085fb2d694" (UID: "430a2854-ac30-4b6f-8bbf-46085fb2d694"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.119966 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430a2854-ac30-4b6f-8bbf-46085fb2d694-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.120009 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jgz7\" (UniqueName: \"kubernetes.io/projected/430a2854-ac30-4b6f-8bbf-46085fb2d694-kube-api-access-7jgz7\") on node \"crc\" DevicePath \"\"" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.406973 4678 generic.go:334] "Generic (PLEG): container finished" podID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerID="e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2" exitCode=0 Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.407020 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8sx2" event={"ID":"430a2854-ac30-4b6f-8bbf-46085fb2d694","Type":"ContainerDied","Data":"e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2"} Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.407052 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8sx2" event={"ID":"430a2854-ac30-4b6f-8bbf-46085fb2d694","Type":"ContainerDied","Data":"45633f0fa93b5f68367bced5dc9ae9cd854dbcfa8619957b6312cc8f3a4d6cb3"} Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.407077 4678 scope.go:117] "RemoveContainer" containerID="e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.407793 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8sx2" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.433862 4678 scope.go:117] "RemoveContainer" containerID="88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.446688 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8sx2"] Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.459842 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c8sx2"] Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.471483 4678 scope.go:117] "RemoveContainer" containerID="286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.513810 4678 scope.go:117] "RemoveContainer" containerID="e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2" Nov 24 12:34:16 crc kubenswrapper[4678]: E1124 12:34:16.514529 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2\": container with ID starting with e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2 not found: ID does not exist" containerID="e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.514662 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2"} err="failed to get container status \"e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2\": rpc error: code = NotFound desc = could not find container \"e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2\": container with ID starting with e9dacdb67c2801a3fc1ab1692755b1337cae8cf5ef1301a1d29f62400343b2b2 not 
found: ID does not exist" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.514774 4678 scope.go:117] "RemoveContainer" containerID="88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248" Nov 24 12:34:16 crc kubenswrapper[4678]: E1124 12:34:16.515326 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248\": container with ID starting with 88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248 not found: ID does not exist" containerID="88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.515371 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248"} err="failed to get container status \"88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248\": rpc error: code = NotFound desc = could not find container \"88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248\": container with ID starting with 88456e5273c523b05885009537b9e4ada30e0492ae75be908c73097173dae248 not found: ID does not exist" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.515402 4678 scope.go:117] "RemoveContainer" containerID="286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb" Nov 24 12:34:16 crc kubenswrapper[4678]: E1124 12:34:16.515971 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb\": container with ID starting with 286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb not found: ID does not exist" containerID="286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb" Nov 24 12:34:16 crc kubenswrapper[4678]: I1124 12:34:16.516298 4678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb"} err="failed to get container status \"286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb\": rpc error: code = NotFound desc = could not find container \"286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb\": container with ID starting with 286b1176a8e13257f5c78e4b89b7cdb41fe60e2918683b1236b36f7b7c79cbbb not found: ID does not exist" Nov 24 12:34:17 crc kubenswrapper[4678]: I1124 12:34:17.912824 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" path="/var/lib/kubelet/pods/430a2854-ac30-4b6f-8bbf-46085fb2d694/volumes" Nov 24 12:34:30 crc kubenswrapper[4678]: I1124 12:34:30.297496 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:34:30 crc kubenswrapper[4678]: I1124 12:34:30.297965 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:35:00 crc kubenswrapper[4678]: I1124 12:35:00.297185 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:35:00 crc kubenswrapper[4678]: I1124 12:35:00.297809 4678 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:35:30 crc kubenswrapper[4678]: I1124 12:35:30.297069 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:35:30 crc kubenswrapper[4678]: I1124 12:35:30.297730 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:35:30 crc kubenswrapper[4678]: I1124 12:35:30.297800 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:35:30 crc kubenswrapper[4678]: I1124 12:35:30.298965 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a31ffc07ae0861b29ae078376eb8ee33d353d6ff9acd31e0ceddd331c47e09e"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:35:30 crc kubenswrapper[4678]: I1124 12:35:30.299035 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" 
containerID="cri-o://1a31ffc07ae0861b29ae078376eb8ee33d353d6ff9acd31e0ceddd331c47e09e" gracePeriod=600 Nov 24 12:35:31 crc kubenswrapper[4678]: I1124 12:35:31.365005 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="1a31ffc07ae0861b29ae078376eb8ee33d353d6ff9acd31e0ceddd331c47e09e" exitCode=0 Nov 24 12:35:31 crc kubenswrapper[4678]: I1124 12:35:31.365101 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"1a31ffc07ae0861b29ae078376eb8ee33d353d6ff9acd31e0ceddd331c47e09e"} Nov 24 12:35:31 crc kubenswrapper[4678]: I1124 12:35:31.365990 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa"} Nov 24 12:35:31 crc kubenswrapper[4678]: I1124 12:35:31.366032 4678 scope.go:117] "RemoveContainer" containerID="362c12d015e32f21be4cd43ec382a42777ddd606054261a2842885b9f8d606cf" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.674383 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j42pm"] Nov 24 12:36:16 crc kubenswrapper[4678]: E1124 12:36:16.675605 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="extract-utilities" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.675620 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="extract-utilities" Nov 24 12:36:16 crc kubenswrapper[4678]: E1124 12:36:16.675650 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="registry-server" Nov 24 12:36:16 
crc kubenswrapper[4678]: I1124 12:36:16.675657 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="registry-server" Nov 24 12:36:16 crc kubenswrapper[4678]: E1124 12:36:16.675705 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="extract-content" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.675712 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="extract-content" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.675973 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="430a2854-ac30-4b6f-8bbf-46085fb2d694" containerName="registry-server" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.678103 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.691399 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j42pm"] Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.825842 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkdk\" (UniqueName: \"kubernetes.io/projected/2d1c213d-ab00-49d9-810c-198631093f6c-kube-api-access-4wkdk\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.825910 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-catalog-content\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:16 crc 
kubenswrapper[4678]: I1124 12:36:16.825933 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-utilities\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.929471 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkdk\" (UniqueName: \"kubernetes.io/projected/2d1c213d-ab00-49d9-810c-198631093f6c-kube-api-access-4wkdk\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.930089 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-utilities\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.930198 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-catalog-content\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.930801 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-utilities\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:16 crc kubenswrapper[4678]: I1124 12:36:16.930846 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-catalog-content\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:17 crc kubenswrapper[4678]: I1124 12:36:17.382312 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkdk\" (UniqueName: \"kubernetes.io/projected/2d1c213d-ab00-49d9-810c-198631093f6c-kube-api-access-4wkdk\") pod \"redhat-operators-j42pm\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:17 crc kubenswrapper[4678]: I1124 12:36:17.603741 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:18 crc kubenswrapper[4678]: I1124 12:36:18.131171 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j42pm"] Nov 24 12:36:18 crc kubenswrapper[4678]: I1124 12:36:18.912731 4678 generic.go:334] "Generic (PLEG): container finished" podID="2d1c213d-ab00-49d9-810c-198631093f6c" containerID="fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044" exitCode=0 Nov 24 12:36:18 crc kubenswrapper[4678]: I1124 12:36:18.912855 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j42pm" event={"ID":"2d1c213d-ab00-49d9-810c-198631093f6c","Type":"ContainerDied","Data":"fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044"} Nov 24 12:36:18 crc kubenswrapper[4678]: I1124 12:36:18.913257 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j42pm" event={"ID":"2d1c213d-ab00-49d9-810c-198631093f6c","Type":"ContainerStarted","Data":"cbda37ecf6edd08e5f0103bb3b77e42c5a922e6e7501bd1f47b86bb0a7de783d"} Nov 24 12:36:20 crc 
kubenswrapper[4678]: I1124 12:36:20.941297 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j42pm" event={"ID":"2d1c213d-ab00-49d9-810c-198631093f6c","Type":"ContainerStarted","Data":"8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8"} Nov 24 12:36:27 crc kubenswrapper[4678]: I1124 12:36:27.015124 4678 generic.go:334] "Generic (PLEG): container finished" podID="2d1c213d-ab00-49d9-810c-198631093f6c" containerID="8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8" exitCode=0 Nov 24 12:36:27 crc kubenswrapper[4678]: I1124 12:36:27.015165 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j42pm" event={"ID":"2d1c213d-ab00-49d9-810c-198631093f6c","Type":"ContainerDied","Data":"8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8"} Nov 24 12:36:29 crc kubenswrapper[4678]: I1124 12:36:29.041511 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j42pm" event={"ID":"2d1c213d-ab00-49d9-810c-198631093f6c","Type":"ContainerStarted","Data":"b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9"} Nov 24 12:36:29 crc kubenswrapper[4678]: I1124 12:36:29.069911 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j42pm" podStartSLOduration=4.234764091 podStartE2EDuration="13.069865184s" podCreationTimestamp="2025-11-24 12:36:16 +0000 UTC" firstStartedPulling="2025-11-24 12:36:18.916328049 +0000 UTC m=+4789.847387688" lastFinishedPulling="2025-11-24 12:36:27.751429142 +0000 UTC m=+4798.682488781" observedRunningTime="2025-11-24 12:36:29.060863362 +0000 UTC m=+4799.991923001" watchObservedRunningTime="2025-11-24 12:36:29.069865184 +0000 UTC m=+4800.000924823" Nov 24 12:36:37 crc kubenswrapper[4678]: I1124 12:36:37.604263 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:37 crc kubenswrapper[4678]: I1124 12:36:37.605091 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:38 crc kubenswrapper[4678]: I1124 12:36:38.661014 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j42pm" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="registry-server" probeResult="failure" output=< Nov 24 12:36:38 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:36:38 crc kubenswrapper[4678]: > Nov 24 12:36:48 crc kubenswrapper[4678]: I1124 12:36:48.654662 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j42pm" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="registry-server" probeResult="failure" output=< Nov 24 12:36:48 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:36:48 crc kubenswrapper[4678]: > Nov 24 12:36:57 crc kubenswrapper[4678]: I1124 12:36:57.661377 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:57 crc kubenswrapper[4678]: I1124 12:36:57.728322 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:36:57 crc kubenswrapper[4678]: I1124 12:36:57.913871 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j42pm"] Nov 24 12:36:59 crc kubenswrapper[4678]: I1124 12:36:59.396523 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j42pm" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="registry-server" containerID="cri-o://b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9" gracePeriod=2 Nov 24 
12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.014494 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.112811 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-catalog-content\") pod \"2d1c213d-ab00-49d9-810c-198631093f6c\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.113001 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-utilities\") pod \"2d1c213d-ab00-49d9-810c-198631093f6c\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.113130 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wkdk\" (UniqueName: \"kubernetes.io/projected/2d1c213d-ab00-49d9-810c-198631093f6c-kube-api-access-4wkdk\") pod \"2d1c213d-ab00-49d9-810c-198631093f6c\" (UID: \"2d1c213d-ab00-49d9-810c-198631093f6c\") " Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.113869 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-utilities" (OuterVolumeSpecName: "utilities") pod "2d1c213d-ab00-49d9-810c-198631093f6c" (UID: "2d1c213d-ab00-49d9-810c-198631093f6c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.119818 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1c213d-ab00-49d9-810c-198631093f6c-kube-api-access-4wkdk" (OuterVolumeSpecName: "kube-api-access-4wkdk") pod "2d1c213d-ab00-49d9-810c-198631093f6c" (UID: "2d1c213d-ab00-49d9-810c-198631093f6c"). InnerVolumeSpecName "kube-api-access-4wkdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.216556 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wkdk\" (UniqueName: \"kubernetes.io/projected/2d1c213d-ab00-49d9-810c-198631093f6c-kube-api-access-4wkdk\") on node \"crc\" DevicePath \"\"" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.216603 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.256535 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d1c213d-ab00-49d9-810c-198631093f6c" (UID: "2d1c213d-ab00-49d9-810c-198631093f6c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.319950 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d1c213d-ab00-49d9-810c-198631093f6c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.410092 4678 generic.go:334] "Generic (PLEG): container finished" podID="2d1c213d-ab00-49d9-810c-198631093f6c" containerID="b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9" exitCode=0 Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.410143 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j42pm" event={"ID":"2d1c213d-ab00-49d9-810c-198631093f6c","Type":"ContainerDied","Data":"b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9"} Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.410154 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j42pm" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.410176 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j42pm" event={"ID":"2d1c213d-ab00-49d9-810c-198631093f6c","Type":"ContainerDied","Data":"cbda37ecf6edd08e5f0103bb3b77e42c5a922e6e7501bd1f47b86bb0a7de783d"} Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.410199 4678 scope.go:117] "RemoveContainer" containerID="b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.444762 4678 scope.go:117] "RemoveContainer" containerID="8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.454879 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j42pm"] Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.467179 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j42pm"] Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.479784 4678 scope.go:117] "RemoveContainer" containerID="fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.541081 4678 scope.go:117] "RemoveContainer" containerID="b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9" Nov 24 12:37:00 crc kubenswrapper[4678]: E1124 12:37:00.541604 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9\": container with ID starting with b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9 not found: ID does not exist" containerID="b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.541646 4678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9"} err="failed to get container status \"b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9\": rpc error: code = NotFound desc = could not find container \"b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9\": container with ID starting with b9f2197d127677ae34eedef5d5a4611a4e48a53ae3ad3117db0e048360e105d9 not found: ID does not exist" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.541688 4678 scope.go:117] "RemoveContainer" containerID="8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8" Nov 24 12:37:00 crc kubenswrapper[4678]: E1124 12:37:00.541986 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8\": container with ID starting with 8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8 not found: ID does not exist" containerID="8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.542028 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8"} err="failed to get container status \"8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8\": rpc error: code = NotFound desc = could not find container \"8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8\": container with ID starting with 8f46b64a004262e7b38b333ea411c20f337c8a028a819afd0241bef0205e46c8 not found: ID does not exist" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.542056 4678 scope.go:117] "RemoveContainer" containerID="fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044" Nov 24 12:37:00 crc kubenswrapper[4678]: E1124 
12:37:00.542276 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044\": container with ID starting with fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044 not found: ID does not exist" containerID="fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044" Nov 24 12:37:00 crc kubenswrapper[4678]: I1124 12:37:00.542297 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044"} err="failed to get container status \"fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044\": rpc error: code = NotFound desc = could not find container \"fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044\": container with ID starting with fc18e8fd44c28f9fed95ee80d681f80914669d109794addcba1640fb212eb044 not found: ID does not exist" Nov 24 12:37:01 crc kubenswrapper[4678]: I1124 12:37:01.909325 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" path="/var/lib/kubelet/pods/2d1c213d-ab00-49d9-810c-198631093f6c/volumes" Nov 24 12:37:30 crc kubenswrapper[4678]: I1124 12:37:30.296986 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:37:30 crc kubenswrapper[4678]: I1124 12:37:30.297905 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 12:38:00 crc kubenswrapper[4678]: I1124 12:38:00.296796 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:38:00 crc kubenswrapper[4678]: I1124 12:38:00.297343 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:38:30 crc kubenswrapper[4678]: I1124 12:38:30.297324 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:38:30 crc kubenswrapper[4678]: I1124 12:38:30.297898 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:38:30 crc kubenswrapper[4678]: I1124 12:38:30.297956 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:38:30 crc kubenswrapper[4678]: I1124 12:38:30.298898 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa"} 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:38:30 crc kubenswrapper[4678]: I1124 12:38:30.298946 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" gracePeriod=600 Nov 24 12:38:30 crc kubenswrapper[4678]: I1124 12:38:30.546887 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" exitCode=0 Nov 24 12:38:30 crc kubenswrapper[4678]: I1124 12:38:30.546961 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa"} Nov 24 12:38:30 crc kubenswrapper[4678]: I1124 12:38:30.547520 4678 scope.go:117] "RemoveContainer" containerID="1a31ffc07ae0861b29ae078376eb8ee33d353d6ff9acd31e0ceddd331c47e09e" Nov 24 12:38:30 crc kubenswrapper[4678]: E1124 12:38:30.934444 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:38:31 crc kubenswrapper[4678]: I1124 12:38:31.560005 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 
24 12:38:31 crc kubenswrapper[4678]: E1124 12:38:31.560621 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:38:42 crc kubenswrapper[4678]: I1124 12:38:42.895651 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:38:42 crc kubenswrapper[4678]: E1124 12:38:42.896568 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.208459 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xlnn9"] Nov 24 12:38:46 crc kubenswrapper[4678]: E1124 12:38:46.210719 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="extract-utilities" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.210751 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="extract-utilities" Nov 24 12:38:46 crc kubenswrapper[4678]: E1124 12:38:46.210782 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="registry-server" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.210791 
4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="registry-server" Nov 24 12:38:46 crc kubenswrapper[4678]: E1124 12:38:46.210821 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="extract-content" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.210831 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="extract-content" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.211910 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1c213d-ab00-49d9-810c-198631093f6c" containerName="registry-server" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.250244 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.254331 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xlnn9"] Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.344093 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhn8r\" (UniqueName: \"kubernetes.io/projected/5c509940-e78d-40fd-b5ae-e06ead922d8d-kube-api-access-jhn8r\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.344217 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-catalog-content\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 
12:38:46.344236 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-utilities\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.447692 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhn8r\" (UniqueName: \"kubernetes.io/projected/5c509940-e78d-40fd-b5ae-e06ead922d8d-kube-api-access-jhn8r\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.447840 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-catalog-content\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.447869 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-utilities\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.448489 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-utilities\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.448557 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-catalog-content\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.468531 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhn8r\" (UniqueName: \"kubernetes.io/projected/5c509940-e78d-40fd-b5ae-e06ead922d8d-kube-api-access-jhn8r\") pod \"certified-operators-xlnn9\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:46 crc kubenswrapper[4678]: I1124 12:38:46.583500 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:38:47 crc kubenswrapper[4678]: I1124 12:38:47.432616 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xlnn9"] Nov 24 12:38:47 crc kubenswrapper[4678]: I1124 12:38:47.743564 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlnn9" event={"ID":"5c509940-e78d-40fd-b5ae-e06ead922d8d","Type":"ContainerStarted","Data":"d395575435fb1a636ebebaab1d25b5fba7b7572ab493c064ec9dda71410fe005"} Nov 24 12:38:48 crc kubenswrapper[4678]: I1124 12:38:48.757459 4678 generic.go:334] "Generic (PLEG): container finished" podID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerID="fb373c2764568a518f10e0fc0d365dd4c43c519ad29560db2f87140c013f43a7" exitCode=0 Nov 24 12:38:48 crc kubenswrapper[4678]: I1124 12:38:48.757817 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlnn9" event={"ID":"5c509940-e78d-40fd-b5ae-e06ead922d8d","Type":"ContainerDied","Data":"fb373c2764568a518f10e0fc0d365dd4c43c519ad29560db2f87140c013f43a7"} Nov 24 
12:38:48 crc kubenswrapper[4678]: I1124 12:38:48.762114 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:38:51 crc kubenswrapper[4678]: I1124 12:38:51.792174 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlnn9" event={"ID":"5c509940-e78d-40fd-b5ae-e06ead922d8d","Type":"ContainerStarted","Data":"675db4a0010d446397f24882a1721cc247eaefe8449622d354dab269c7bf16b0"} Nov 24 12:38:54 crc kubenswrapper[4678]: I1124 12:38:54.827287 4678 generic.go:334] "Generic (PLEG): container finished" podID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerID="675db4a0010d446397f24882a1721cc247eaefe8449622d354dab269c7bf16b0" exitCode=0 Nov 24 12:38:54 crc kubenswrapper[4678]: I1124 12:38:54.827404 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlnn9" event={"ID":"5c509940-e78d-40fd-b5ae-e06ead922d8d","Type":"ContainerDied","Data":"675db4a0010d446397f24882a1721cc247eaefe8449622d354dab269c7bf16b0"} Nov 24 12:38:55 crc kubenswrapper[4678]: I1124 12:38:55.896465 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:38:55 crc kubenswrapper[4678]: E1124 12:38:55.897395 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:38:56 crc kubenswrapper[4678]: I1124 12:38:56.871613 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlnn9" 
event={"ID":"5c509940-e78d-40fd-b5ae-e06ead922d8d","Type":"ContainerStarted","Data":"230b16cf9cbe920c30c0c6cfdf779075941a73894ac08aaac48f062dcde4b05c"} Nov 24 12:38:56 crc kubenswrapper[4678]: I1124 12:38:56.893362 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xlnn9" podStartSLOduration=3.981215974 podStartE2EDuration="10.8933448s" podCreationTimestamp="2025-11-24 12:38:46 +0000 UTC" firstStartedPulling="2025-11-24 12:38:48.761784696 +0000 UTC m=+4939.692844335" lastFinishedPulling="2025-11-24 12:38:55.673913522 +0000 UTC m=+4946.604973161" observedRunningTime="2025-11-24 12:38:56.888144311 +0000 UTC m=+4947.819203950" watchObservedRunningTime="2025-11-24 12:38:56.8933448 +0000 UTC m=+4947.824404439" Nov 24 12:39:06 crc kubenswrapper[4678]: I1124 12:39:06.584226 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:39:06 crc kubenswrapper[4678]: I1124 12:39:06.586259 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:39:06 crc kubenswrapper[4678]: I1124 12:39:06.635288 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:39:07 crc kubenswrapper[4678]: I1124 12:39:07.091436 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:39:07 crc kubenswrapper[4678]: I1124 12:39:07.146206 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xlnn9"] Nov 24 12:39:07 crc kubenswrapper[4678]: I1124 12:39:07.896278 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:39:07 crc kubenswrapper[4678]: E1124 12:39:07.896876 4678 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:39:09 crc kubenswrapper[4678]: I1124 12:39:09.052021 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xlnn9" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerName="registry-server" containerID="cri-o://230b16cf9cbe920c30c0c6cfdf779075941a73894ac08aaac48f062dcde4b05c" gracePeriod=2 Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.065656 4678 generic.go:334] "Generic (PLEG): container finished" podID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerID="230b16cf9cbe920c30c0c6cfdf779075941a73894ac08aaac48f062dcde4b05c" exitCode=0 Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.065724 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlnn9" event={"ID":"5c509940-e78d-40fd-b5ae-e06ead922d8d","Type":"ContainerDied","Data":"230b16cf9cbe920c30c0c6cfdf779075941a73894ac08aaac48f062dcde4b05c"} Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.066378 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlnn9" event={"ID":"5c509940-e78d-40fd-b5ae-e06ead922d8d","Type":"ContainerDied","Data":"d395575435fb1a636ebebaab1d25b5fba7b7572ab493c064ec9dda71410fe005"} Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.066402 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d395575435fb1a636ebebaab1d25b5fba7b7572ab493c064ec9dda71410fe005" Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.167690 4678 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.208311 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhn8r\" (UniqueName: \"kubernetes.io/projected/5c509940-e78d-40fd-b5ae-e06ead922d8d-kube-api-access-jhn8r\") pod \"5c509940-e78d-40fd-b5ae-e06ead922d8d\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.208429 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-catalog-content\") pod \"5c509940-e78d-40fd-b5ae-e06ead922d8d\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.208591 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-utilities\") pod \"5c509940-e78d-40fd-b5ae-e06ead922d8d\" (UID: \"5c509940-e78d-40fd-b5ae-e06ead922d8d\") " Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.210099 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-utilities" (OuterVolumeSpecName: "utilities") pod "5c509940-e78d-40fd-b5ae-e06ead922d8d" (UID: "5c509940-e78d-40fd-b5ae-e06ead922d8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.217257 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c509940-e78d-40fd-b5ae-e06ead922d8d-kube-api-access-jhn8r" (OuterVolumeSpecName: "kube-api-access-jhn8r") pod "5c509940-e78d-40fd-b5ae-e06ead922d8d" (UID: "5c509940-e78d-40fd-b5ae-e06ead922d8d"). InnerVolumeSpecName "kube-api-access-jhn8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.267609 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c509940-e78d-40fd-b5ae-e06ead922d8d" (UID: "5c509940-e78d-40fd-b5ae-e06ead922d8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.311216 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.311253 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c509940-e78d-40fd-b5ae-e06ead922d8d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:10 crc kubenswrapper[4678]: I1124 12:39:10.311266 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhn8r\" (UniqueName: \"kubernetes.io/projected/5c509940-e78d-40fd-b5ae-e06ead922d8d-kube-api-access-jhn8r\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:11 crc kubenswrapper[4678]: I1124 12:39:11.076034 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xlnn9" Nov 24 12:39:11 crc kubenswrapper[4678]: I1124 12:39:11.126530 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xlnn9"] Nov 24 12:39:11 crc kubenswrapper[4678]: I1124 12:39:11.139190 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xlnn9"] Nov 24 12:39:11 crc kubenswrapper[4678]: I1124 12:39:11.910897 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" path="/var/lib/kubelet/pods/5c509940-e78d-40fd-b5ae-e06ead922d8d/volumes" Nov 24 12:39:22 crc kubenswrapper[4678]: I1124 12:39:22.895931 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:39:22 crc kubenswrapper[4678]: E1124 12:39:22.897038 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:39:36 crc kubenswrapper[4678]: I1124 12:39:36.897826 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:39:36 crc kubenswrapper[4678]: E1124 12:39:36.898755 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:39:47 crc kubenswrapper[4678]: I1124 12:39:47.896268 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:39:47 crc kubenswrapper[4678]: E1124 12:39:47.897238 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:39:58 crc kubenswrapper[4678]: I1124 12:39:58.897443 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:39:58 crc kubenswrapper[4678]: E1124 12:39:58.898370 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:40:12 crc kubenswrapper[4678]: I1124 12:40:12.896950 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:40:12 crc kubenswrapper[4678]: E1124 12:40:12.898167 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:40:24 crc kubenswrapper[4678]: I1124 12:40:24.895484 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:40:24 crc kubenswrapper[4678]: E1124 12:40:24.896393 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:40:35 crc kubenswrapper[4678]: I1124 12:40:35.896528 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:40:35 crc kubenswrapper[4678]: E1124 12:40:35.897418 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:40:46 crc kubenswrapper[4678]: I1124 12:40:46.896435 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:40:46 crc kubenswrapper[4678]: E1124 12:40:46.897811 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:41:00 crc kubenswrapper[4678]: I1124 12:41:00.896647 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:41:00 crc kubenswrapper[4678]: E1124 12:41:00.897687 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:41:13 crc kubenswrapper[4678]: I1124 12:41:13.900567 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:41:13 crc kubenswrapper[4678]: E1124 12:41:13.901549 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:41:28 crc kubenswrapper[4678]: I1124 12:41:28.895853 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:41:28 crc kubenswrapper[4678]: E1124 12:41:28.897082 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:41:40 crc kubenswrapper[4678]: I1124 12:41:40.896507 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:41:40 crc kubenswrapper[4678]: E1124 12:41:40.897732 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:41:55 crc kubenswrapper[4678]: I1124 12:41:55.896749 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:41:55 crc kubenswrapper[4678]: E1124 12:41:55.897726 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:42:10 crc kubenswrapper[4678]: I1124 12:42:10.896461 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:42:10 crc kubenswrapper[4678]: E1124 12:42:10.897647 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:42:22 crc kubenswrapper[4678]: I1124 12:42:22.896720 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:42:22 crc kubenswrapper[4678]: E1124 12:42:22.898041 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:42:34 crc kubenswrapper[4678]: I1124 12:42:34.896921 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:42:34 crc kubenswrapper[4678]: E1124 12:42:34.897872 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:42:48 crc kubenswrapper[4678]: I1124 12:42:48.895971 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:42:48 crc kubenswrapper[4678]: E1124 12:42:48.897392 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:43:02 crc kubenswrapper[4678]: I1124 12:43:02.895816 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:43:02 crc kubenswrapper[4678]: E1124 12:43:02.908128 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:43:16 crc kubenswrapper[4678]: I1124 12:43:16.896598 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:43:16 crc kubenswrapper[4678]: E1124 12:43:16.897463 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:43:27 crc kubenswrapper[4678]: I1124 12:43:27.896211 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:43:27 crc kubenswrapper[4678]: E1124 12:43:27.897232 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:43:41 crc kubenswrapper[4678]: I1124 12:43:41.895611 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:43:42 crc kubenswrapper[4678]: I1124 12:43:42.781980 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"75a7c126087d7d1ddf3a04fd019fd0506ed9d0cf3acde60906561fca1eb78321"} Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.161335 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h5klc"] Nov 24 12:43:44 crc kubenswrapper[4678]: E1124 12:43:44.162451 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerName="extract-utilities" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.162465 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerName="extract-utilities" Nov 24 12:43:44 crc kubenswrapper[4678]: E1124 12:43:44.162485 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerName="registry-server" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.162491 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerName="registry-server" Nov 24 12:43:44 crc kubenswrapper[4678]: E1124 12:43:44.162535 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerName="extract-content" Nov 
24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.162543 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerName="extract-content" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.162849 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c509940-e78d-40fd-b5ae-e06ead922d8d" containerName="registry-server" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.164826 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.190132 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5klc"] Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.200584 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-catalog-content\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.200902 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6p28\" (UniqueName: \"kubernetes.io/projected/e276f919-45f6-478e-80e8-3295f61436d9-kube-api-access-r6p28\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.200959 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-utilities\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" 
Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.302153 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6p28\" (UniqueName: \"kubernetes.io/projected/e276f919-45f6-478e-80e8-3295f61436d9-kube-api-access-r6p28\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.302498 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-utilities\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.302604 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-catalog-content\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.303361 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-catalog-content\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.304087 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-utilities\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 
12:43:44.322361 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6p28\" (UniqueName: \"kubernetes.io/projected/e276f919-45f6-478e-80e8-3295f61436d9-kube-api-access-r6p28\") pod \"redhat-marketplace-h5klc\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:44 crc kubenswrapper[4678]: I1124 12:43:44.491341 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:45 crc kubenswrapper[4678]: I1124 12:43:45.089461 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5klc"] Nov 24 12:43:45 crc kubenswrapper[4678]: I1124 12:43:45.819441 4678 generic.go:334] "Generic (PLEG): container finished" podID="e276f919-45f6-478e-80e8-3295f61436d9" containerID="814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b" exitCode=0 Nov 24 12:43:45 crc kubenswrapper[4678]: I1124 12:43:45.819540 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5klc" event={"ID":"e276f919-45f6-478e-80e8-3295f61436d9","Type":"ContainerDied","Data":"814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b"} Nov 24 12:43:45 crc kubenswrapper[4678]: I1124 12:43:45.819775 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5klc" event={"ID":"e276f919-45f6-478e-80e8-3295f61436d9","Type":"ContainerStarted","Data":"a7347463c8118360acd429c37003f7fe6b30e1f4851f62dd7a3976c9e2f1d276"} Nov 24 12:43:46 crc kubenswrapper[4678]: I1124 12:43:46.838073 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5klc" event={"ID":"e276f919-45f6-478e-80e8-3295f61436d9","Type":"ContainerStarted","Data":"afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7"} Nov 24 12:43:47 crc kubenswrapper[4678]: I1124 
12:43:47.849290 4678 generic.go:334] "Generic (PLEG): container finished" podID="e276f919-45f6-478e-80e8-3295f61436d9" containerID="afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7" exitCode=0 Nov 24 12:43:47 crc kubenswrapper[4678]: I1124 12:43:47.849369 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5klc" event={"ID":"e276f919-45f6-478e-80e8-3295f61436d9","Type":"ContainerDied","Data":"afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7"} Nov 24 12:43:48 crc kubenswrapper[4678]: I1124 12:43:48.864842 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5klc" event={"ID":"e276f919-45f6-478e-80e8-3295f61436d9","Type":"ContainerStarted","Data":"de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e"} Nov 24 12:43:48 crc kubenswrapper[4678]: I1124 12:43:48.901423 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h5klc" podStartSLOduration=2.495180974 podStartE2EDuration="4.90139892s" podCreationTimestamp="2025-11-24 12:43:44 +0000 UTC" firstStartedPulling="2025-11-24 12:43:45.823036196 +0000 UTC m=+5236.754095835" lastFinishedPulling="2025-11-24 12:43:48.229254152 +0000 UTC m=+5239.160313781" observedRunningTime="2025-11-24 12:43:48.895740099 +0000 UTC m=+5239.826799738" watchObservedRunningTime="2025-11-24 12:43:48.90139892 +0000 UTC m=+5239.832458559" Nov 24 12:43:54 crc kubenswrapper[4678]: I1124 12:43:54.492340 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:54 crc kubenswrapper[4678]: I1124 12:43:54.493621 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:54 crc kubenswrapper[4678]: I1124 12:43:54.547755 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:54 crc kubenswrapper[4678]: I1124 12:43:54.972873 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:55 crc kubenswrapper[4678]: I1124 12:43:55.022006 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5klc"] Nov 24 12:43:56 crc kubenswrapper[4678]: I1124 12:43:56.945346 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h5klc" podUID="e276f919-45f6-478e-80e8-3295f61436d9" containerName="registry-server" containerID="cri-o://de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e" gracePeriod=2 Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.511262 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.632909 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-catalog-content\") pod \"e276f919-45f6-478e-80e8-3295f61436d9\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.633157 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6p28\" (UniqueName: \"kubernetes.io/projected/e276f919-45f6-478e-80e8-3295f61436d9-kube-api-access-r6p28\") pod \"e276f919-45f6-478e-80e8-3295f61436d9\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.633261 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-utilities\") pod 
\"e276f919-45f6-478e-80e8-3295f61436d9\" (UID: \"e276f919-45f6-478e-80e8-3295f61436d9\") " Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.634078 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-utilities" (OuterVolumeSpecName: "utilities") pod "e276f919-45f6-478e-80e8-3295f61436d9" (UID: "e276f919-45f6-478e-80e8-3295f61436d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.638962 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e276f919-45f6-478e-80e8-3295f61436d9-kube-api-access-r6p28" (OuterVolumeSpecName: "kube-api-access-r6p28") pod "e276f919-45f6-478e-80e8-3295f61436d9" (UID: "e276f919-45f6-478e-80e8-3295f61436d9"). InnerVolumeSpecName "kube-api-access-r6p28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.651723 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e276f919-45f6-478e-80e8-3295f61436d9" (UID: "e276f919-45f6-478e-80e8-3295f61436d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.735500 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.735537 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6p28\" (UniqueName: \"kubernetes.io/projected/e276f919-45f6-478e-80e8-3295f61436d9-kube-api-access-r6p28\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.735551 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276f919-45f6-478e-80e8-3295f61436d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.959798 4678 generic.go:334] "Generic (PLEG): container finished" podID="e276f919-45f6-478e-80e8-3295f61436d9" containerID="de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e" exitCode=0 Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.959857 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5klc" event={"ID":"e276f919-45f6-478e-80e8-3295f61436d9","Type":"ContainerDied","Data":"de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e"} Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.959935 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5klc" event={"ID":"e276f919-45f6-478e-80e8-3295f61436d9","Type":"ContainerDied","Data":"a7347463c8118360acd429c37003f7fe6b30e1f4851f62dd7a3976c9e2f1d276"} Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.959958 4678 scope.go:117] "RemoveContainer" containerID="de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 
12:43:57.960219 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5klc" Nov 24 12:43:57 crc kubenswrapper[4678]: I1124 12:43:57.998491 4678 scope.go:117] "RemoveContainer" containerID="afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7" Nov 24 12:43:58 crc kubenswrapper[4678]: I1124 12:43:58.007003 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5klc"] Nov 24 12:43:58 crc kubenswrapper[4678]: I1124 12:43:58.033313 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5klc"] Nov 24 12:43:58 crc kubenswrapper[4678]: I1124 12:43:58.037804 4678 scope.go:117] "RemoveContainer" containerID="814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b" Nov 24 12:43:58 crc kubenswrapper[4678]: I1124 12:43:58.093966 4678 scope.go:117] "RemoveContainer" containerID="de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e" Nov 24 12:43:58 crc kubenswrapper[4678]: E1124 12:43:58.094468 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e\": container with ID starting with de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e not found: ID does not exist" containerID="de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e" Nov 24 12:43:58 crc kubenswrapper[4678]: I1124 12:43:58.094509 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e"} err="failed to get container status \"de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e\": rpc error: code = NotFound desc = could not find container \"de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e\": container with ID starting with 
de7aa4665fef4609e245352bc8191ddc2162e36da80ce21fc34b09b38d30673e not found: ID does not exist" Nov 24 12:43:58 crc kubenswrapper[4678]: I1124 12:43:58.094532 4678 scope.go:117] "RemoveContainer" containerID="afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7" Nov 24 12:43:58 crc kubenswrapper[4678]: E1124 12:43:58.094870 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7\": container with ID starting with afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7 not found: ID does not exist" containerID="afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7" Nov 24 12:43:58 crc kubenswrapper[4678]: I1124 12:43:58.094892 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7"} err="failed to get container status \"afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7\": rpc error: code = NotFound desc = could not find container \"afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7\": container with ID starting with afdaf8a0b9046a52e82bec072c54c9077dbb7875fe226b49c25d880e86345af7 not found: ID does not exist" Nov 24 12:43:58 crc kubenswrapper[4678]: I1124 12:43:58.094905 4678 scope.go:117] "RemoveContainer" containerID="814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b" Nov 24 12:43:58 crc kubenswrapper[4678]: E1124 12:43:58.095176 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b\": container with ID starting with 814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b not found: ID does not exist" containerID="814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b" Nov 24 12:43:58 crc 
kubenswrapper[4678]: I1124 12:43:58.095198 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b"} err="failed to get container status \"814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b\": rpc error: code = NotFound desc = could not find container \"814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b\": container with ID starting with 814612ac35faca0edcd81da29efd48bbef16ee48e7e5be85c78ec1b6afe5788b not found: ID does not exist" Nov 24 12:43:59 crc kubenswrapper[4678]: I1124 12:43:59.909144 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e276f919-45f6-478e-80e8-3295f61436d9" path="/var/lib/kubelet/pods/e276f919-45f6-478e-80e8-3295f61436d9/volumes" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.005372 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lmdm7"] Nov 24 12:44:45 crc kubenswrapper[4678]: E1124 12:44:45.007173 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e276f919-45f6-478e-80e8-3295f61436d9" containerName="extract-content" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.007194 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e276f919-45f6-478e-80e8-3295f61436d9" containerName="extract-content" Nov 24 12:44:45 crc kubenswrapper[4678]: E1124 12:44:45.007239 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e276f919-45f6-478e-80e8-3295f61436d9" containerName="registry-server" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.007246 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e276f919-45f6-478e-80e8-3295f61436d9" containerName="registry-server" Nov 24 12:44:45 crc kubenswrapper[4678]: E1124 12:44:45.007261 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e276f919-45f6-478e-80e8-3295f61436d9" containerName="extract-utilities" Nov 
24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.007268 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e276f919-45f6-478e-80e8-3295f61436d9" containerName="extract-utilities" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.007607 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e276f919-45f6-478e-80e8-3295f61436d9" containerName="registry-server" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.010231 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.022946 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmdm7"] Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.190563 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvgg\" (UniqueName: \"kubernetes.io/projected/4686ce94-5321-49ca-b107-3f9e755495a8-kube-api-access-xfvgg\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.190645 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4686ce94-5321-49ca-b107-3f9e755495a8-catalog-content\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.190758 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4686ce94-5321-49ca-b107-3f9e755495a8-utilities\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " 
pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.292775 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfvgg\" (UniqueName: \"kubernetes.io/projected/4686ce94-5321-49ca-b107-3f9e755495a8-kube-api-access-xfvgg\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.292853 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4686ce94-5321-49ca-b107-3f9e755495a8-catalog-content\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.292885 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4686ce94-5321-49ca-b107-3f9e755495a8-utilities\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.293505 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4686ce94-5321-49ca-b107-3f9e755495a8-catalog-content\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.293517 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4686ce94-5321-49ca-b107-3f9e755495a8-utilities\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " 
pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.316563 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfvgg\" (UniqueName: \"kubernetes.io/projected/4686ce94-5321-49ca-b107-3f9e755495a8-kube-api-access-xfvgg\") pod \"community-operators-lmdm7\" (UID: \"4686ce94-5321-49ca-b107-3f9e755495a8\") " pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.344605 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:45 crc kubenswrapper[4678]: I1124 12:44:45.910590 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmdm7"] Nov 24 12:44:46 crc kubenswrapper[4678]: I1124 12:44:46.494565 4678 generic.go:334] "Generic (PLEG): container finished" podID="4686ce94-5321-49ca-b107-3f9e755495a8" containerID="cb87f04db05bcd8656a258c7c96c0b39d547fde94b57957bbb9df24b6530180f" exitCode=0 Nov 24 12:44:46 crc kubenswrapper[4678]: I1124 12:44:46.494625 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmdm7" event={"ID":"4686ce94-5321-49ca-b107-3f9e755495a8","Type":"ContainerDied","Data":"cb87f04db05bcd8656a258c7c96c0b39d547fde94b57957bbb9df24b6530180f"} Nov 24 12:44:46 crc kubenswrapper[4678]: I1124 12:44:46.495098 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmdm7" event={"ID":"4686ce94-5321-49ca-b107-3f9e755495a8","Type":"ContainerStarted","Data":"2278a319e2118c0d4a173290620a7cebfd79116ff6d2ece433584b7c7b3b1318"} Nov 24 12:44:46 crc kubenswrapper[4678]: I1124 12:44:46.497309 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:44:51 crc kubenswrapper[4678]: I1124 12:44:51.571540 4678 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-lmdm7" event={"ID":"4686ce94-5321-49ca-b107-3f9e755495a8","Type":"ContainerStarted","Data":"dc8a7e1fe893d95f9c3f8c8f92ec885382bb715c48fad97771f2218c03c83187"} Nov 24 12:44:52 crc kubenswrapper[4678]: I1124 12:44:52.583470 4678 generic.go:334] "Generic (PLEG): container finished" podID="4686ce94-5321-49ca-b107-3f9e755495a8" containerID="dc8a7e1fe893d95f9c3f8c8f92ec885382bb715c48fad97771f2218c03c83187" exitCode=0 Nov 24 12:44:52 crc kubenswrapper[4678]: I1124 12:44:52.583729 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmdm7" event={"ID":"4686ce94-5321-49ca-b107-3f9e755495a8","Type":"ContainerDied","Data":"dc8a7e1fe893d95f9c3f8c8f92ec885382bb715c48fad97771f2218c03c83187"} Nov 24 12:44:53 crc kubenswrapper[4678]: I1124 12:44:53.595739 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmdm7" event={"ID":"4686ce94-5321-49ca-b107-3f9e755495a8","Type":"ContainerStarted","Data":"e2accf3248abc3a96a5af06a94869c635df97b3d792182e1a8872820237de97a"} Nov 24 12:44:53 crc kubenswrapper[4678]: I1124 12:44:53.620974 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lmdm7" podStartSLOduration=3.153827529 podStartE2EDuration="9.620955425s" podCreationTimestamp="2025-11-24 12:44:44 +0000 UTC" firstStartedPulling="2025-11-24 12:44:46.497048182 +0000 UTC m=+5297.428107821" lastFinishedPulling="2025-11-24 12:44:52.964176078 +0000 UTC m=+5303.895235717" observedRunningTime="2025-11-24 12:44:53.612232051 +0000 UTC m=+5304.543291690" watchObservedRunningTime="2025-11-24 12:44:53.620955425 +0000 UTC m=+5304.552015064" Nov 24 12:44:55 crc kubenswrapper[4678]: I1124 12:44:55.345003 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:55 crc kubenswrapper[4678]: I1124 12:44:55.345544 
4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:55 crc kubenswrapper[4678]: I1124 12:44:55.401034 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.415150 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.418119 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.420364 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.420458 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.420575 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fghgg" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.423100 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.445261 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.544718 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.544890 4678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.545064 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.545099 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-config-data\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.545164 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.545533 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8rg\" (UniqueName: \"kubernetes.io/projected/fa52a8b5-88fb-4f22-b067-edbdcee003ea-kube-api-access-5v8rg\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.545748 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.545888 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.545969 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648124 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v8rg\" (UniqueName: \"kubernetes.io/projected/fa52a8b5-88fb-4f22-b067-edbdcee003ea-kube-api-access-5v8rg\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648286 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648329 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648360 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648446 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648496 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648533 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-config-data\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648595 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648773 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.649462 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.649027 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.648819 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.649798 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.652616 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.656606 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.657153 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.658032 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.665255 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v8rg\" (UniqueName: \"kubernetes.io/projected/fa52a8b5-88fb-4f22-b067-edbdcee003ea-kube-api-access-5v8rg\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc 
kubenswrapper[4678]: I1124 12:44:59.692625 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " pod="openstack/tempest-tests-tempest" Nov 24 12:44:59 crc kubenswrapper[4678]: I1124 12:44:59.744221 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.152221 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx"] Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.155030 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.159162 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.161105 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.165594 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx"] Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.265236 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-config-volume\") pod \"collect-profiles-29399805-bqpwx\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 
12:45:00.265928 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvlvv\" (UniqueName: \"kubernetes.io/projected/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-kube-api-access-jvlvv\") pod \"collect-profiles-29399805-bqpwx\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.266296 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-secret-volume\") pod \"collect-profiles-29399805-bqpwx\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.368641 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-secret-volume\") pod \"collect-profiles-29399805-bqpwx\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.368773 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-config-volume\") pod \"collect-profiles-29399805-bqpwx\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.368863 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvlvv\" (UniqueName: \"kubernetes.io/projected/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-kube-api-access-jvlvv\") pod \"collect-profiles-29399805-bqpwx\" (UID: 
\"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.370430 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-config-volume\") pod \"collect-profiles-29399805-bqpwx\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.375867 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-secret-volume\") pod \"collect-profiles-29399805-bqpwx\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.388577 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvlvv\" (UniqueName: \"kubernetes.io/projected/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-kube-api-access-jvlvv\") pod \"collect-profiles-29399805-bqpwx\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.414664 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 12:45:00 crc kubenswrapper[4678]: W1124 12:45:00.423274 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa52a8b5_88fb_4f22_b067_edbdcee003ea.slice/crio-4e27ceeb04a9cc39a0a72f2438eea255cff1dc74118105a7ca4c5aa5c281629a WatchSource:0}: Error finding container 4e27ceeb04a9cc39a0a72f2438eea255cff1dc74118105a7ca4c5aa5c281629a: Status 404 returned error can't find the container with 
id 4e27ceeb04a9cc39a0a72f2438eea255cff1dc74118105a7ca4c5aa5c281629a Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.493265 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.671902 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa52a8b5-88fb-4f22-b067-edbdcee003ea","Type":"ContainerStarted","Data":"4e27ceeb04a9cc39a0a72f2438eea255cff1dc74118105a7ca4c5aa5c281629a"} Nov 24 12:45:00 crc kubenswrapper[4678]: I1124 12:45:00.948422 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx"] Nov 24 12:45:00 crc kubenswrapper[4678]: W1124 12:45:00.953684 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8842530_6e2e_4c6c_8d7f_c32867f3faa2.slice/crio-a7a36d3cd9e796520dcee2350cbf35fa5a9ca556adb2e0225181e8eddca16d26 WatchSource:0}: Error finding container a7a36d3cd9e796520dcee2350cbf35fa5a9ca556adb2e0225181e8eddca16d26: Status 404 returned error can't find the container with id a7a36d3cd9e796520dcee2350cbf35fa5a9ca556adb2e0225181e8eddca16d26 Nov 24 12:45:01 crc kubenswrapper[4678]: I1124 12:45:01.685628 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" event={"ID":"d8842530-6e2e-4c6c-8d7f-c32867f3faa2","Type":"ContainerStarted","Data":"4ea6e8b2f3458fa60e9f21cd92613cde68c99c468495c4fedb02df3b8c3b6603"} Nov 24 12:45:01 crc kubenswrapper[4678]: I1124 12:45:01.686915 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" 
event={"ID":"d8842530-6e2e-4c6c-8d7f-c32867f3faa2","Type":"ContainerStarted","Data":"a7a36d3cd9e796520dcee2350cbf35fa5a9ca556adb2e0225181e8eddca16d26"} Nov 24 12:45:01 crc kubenswrapper[4678]: I1124 12:45:01.711767 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" podStartSLOduration=1.711745533 podStartE2EDuration="1.711745533s" podCreationTimestamp="2025-11-24 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:01.699455084 +0000 UTC m=+5312.630539464" watchObservedRunningTime="2025-11-24 12:45:01.711745533 +0000 UTC m=+5312.642805172" Nov 24 12:45:02 crc kubenswrapper[4678]: I1124 12:45:02.698097 4678 generic.go:334] "Generic (PLEG): container finished" podID="d8842530-6e2e-4c6c-8d7f-c32867f3faa2" containerID="4ea6e8b2f3458fa60e9f21cd92613cde68c99c468495c4fedb02df3b8c3b6603" exitCode=0 Nov 24 12:45:02 crc kubenswrapper[4678]: I1124 12:45:02.698278 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" event={"ID":"d8842530-6e2e-4c6c-8d7f-c32867f3faa2","Type":"ContainerDied","Data":"4ea6e8b2f3458fa60e9f21cd92613cde68c99c468495c4fedb02df3b8c3b6603"} Nov 24 12:45:05 crc kubenswrapper[4678]: I1124 12:45:05.404264 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lmdm7" Nov 24 12:45:05 crc kubenswrapper[4678]: I1124 12:45:05.506780 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmdm7"] Nov 24 12:45:05 crc kubenswrapper[4678]: I1124 12:45:05.595278 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4k52z"] Nov 24 12:45:05 crc kubenswrapper[4678]: I1124 12:45:05.595611 4678 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-4k52z" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="registry-server" containerID="cri-o://0bca8608b767b68e3ee95b94418253bd50a1d78623f05cbdd3c5b36dcfa75f49" gracePeriod=2 Nov 24 12:45:06 crc kubenswrapper[4678]: I1124 12:45:06.766755 4678 generic.go:334] "Generic (PLEG): container finished" podID="92e69f8c-3e27-40e9-9745-58c570b67749" containerID="0bca8608b767b68e3ee95b94418253bd50a1d78623f05cbdd3c5b36dcfa75f49" exitCode=0 Nov 24 12:45:06 crc kubenswrapper[4678]: I1124 12:45:06.766838 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k52z" event={"ID":"92e69f8c-3e27-40e9-9745-58c570b67749","Type":"ContainerDied","Data":"0bca8608b767b68e3ee95b94418253bd50a1d78623f05cbdd3c5b36dcfa75f49"} Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.214450 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.302494 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-secret-volume\") pod \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.302733 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvlvv\" (UniqueName: \"kubernetes.io/projected/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-kube-api-access-jvlvv\") pod \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.302824 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-config-volume\") pod \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\" (UID: \"d8842530-6e2e-4c6c-8d7f-c32867f3faa2\") " Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.314263 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8842530-6e2e-4c6c-8d7f-c32867f3faa2" (UID: "d8842530-6e2e-4c6c-8d7f-c32867f3faa2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.315633 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-kube-api-access-jvlvv" (OuterVolumeSpecName: "kube-api-access-jvlvv") pod "d8842530-6e2e-4c6c-8d7f-c32867f3faa2" (UID: "d8842530-6e2e-4c6c-8d7f-c32867f3faa2"). InnerVolumeSpecName "kube-api-access-jvlvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.324625 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-config-volume" (OuterVolumeSpecName: "config-volume") pod "d8842530-6e2e-4c6c-8d7f-c32867f3faa2" (UID: "d8842530-6e2e-4c6c-8d7f-c32867f3faa2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.416543 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.416600 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvlvv\" (UniqueName: \"kubernetes.io/projected/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-kube-api-access-jvlvv\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.416612 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8842530-6e2e-4c6c-8d7f-c32867f3faa2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.542751 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4k52z" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.623570 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-utilities\") pod \"92e69f8c-3e27-40e9-9745-58c570b67749\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.625570 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-utilities" (OuterVolumeSpecName: "utilities") pod "92e69f8c-3e27-40e9-9745-58c570b67749" (UID: "92e69f8c-3e27-40e9-9745-58c570b67749"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.626224 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-catalog-content\") pod \"92e69f8c-3e27-40e9-9745-58c570b67749\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.627505 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwkvw\" (UniqueName: \"kubernetes.io/projected/92e69f8c-3e27-40e9-9745-58c570b67749-kube-api-access-zwkvw\") pod \"92e69f8c-3e27-40e9-9745-58c570b67749\" (UID: \"92e69f8c-3e27-40e9-9745-58c570b67749\") " Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.628727 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.635643 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e69f8c-3e27-40e9-9745-58c570b67749-kube-api-access-zwkvw" (OuterVolumeSpecName: "kube-api-access-zwkvw") pod "92e69f8c-3e27-40e9-9745-58c570b67749" (UID: "92e69f8c-3e27-40e9-9745-58c570b67749"). InnerVolumeSpecName "kube-api-access-zwkvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.667769 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92e69f8c-3e27-40e9-9745-58c570b67749" (UID: "92e69f8c-3e27-40e9-9745-58c570b67749"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.730584 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwkvw\" (UniqueName: \"kubernetes.io/projected/92e69f8c-3e27-40e9-9745-58c570b67749-kube-api-access-zwkvw\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.730631 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e69f8c-3e27-40e9-9745-58c570b67749-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.793456 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k52z" event={"ID":"92e69f8c-3e27-40e9-9745-58c570b67749","Type":"ContainerDied","Data":"e34fda1697be371f7557c953702d247891ac088fdfb7d1a30365fe829d29829a"} Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.793492 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4k52z" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.793537 4678 scope.go:117] "RemoveContainer" containerID="0bca8608b767b68e3ee95b94418253bd50a1d78623f05cbdd3c5b36dcfa75f49" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.798557 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" event={"ID":"d8842530-6e2e-4c6c-8d7f-c32867f3faa2","Type":"ContainerDied","Data":"a7a36d3cd9e796520dcee2350cbf35fa5a9ca556adb2e0225181e8eddca16d26"} Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.798605 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7a36d3cd9e796520dcee2350cbf35fa5a9ca556adb2e0225181e8eddca16d26" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.798613 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-bqpwx" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.877128 4678 scope.go:117] "RemoveContainer" containerID="be818df800eef90ac7f420e0d7a149f5e65e09955f8adcd0f383f71ae326c2e8" Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.881720 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4k52z"] Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.900571 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4k52z"] Nov 24 12:45:08 crc kubenswrapper[4678]: I1124 12:45:08.984207 4678 scope.go:117] "RemoveContainer" containerID="ff5fbea66b9e5fa8e01dd81e1b5d5161a5b71c5baba15954fd1a8b9c3dec200b" Nov 24 12:45:09 crc kubenswrapper[4678]: I1124 12:45:09.295892 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks"] Nov 24 12:45:09 crc kubenswrapper[4678]: I1124 12:45:09.306760 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399760-fsbks"] Nov 24 12:45:09 crc kubenswrapper[4678]: I1124 12:45:09.909104 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" path="/var/lib/kubelet/pods/92e69f8c-3e27-40e9-9745-58c570b67749/volumes" Nov 24 12:45:09 crc kubenswrapper[4678]: I1124 12:45:09.911205 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb98269-f363-4f12-9736-6f3e6723aa0b" path="/var/lib/kubelet/pods/ccb98269-f363-4f12-9736-6f3e6723aa0b/volumes" Nov 24 12:45:36 crc kubenswrapper[4678]: E1124 12:45:36.301982 4678 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 24 12:45:36 crc 
kubenswrapper[4678]: E1124 12:45:36.306368 4678 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5v8rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/servicea
ccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(fa52a8b5-88fb-4f22-b067-edbdcee003ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:45:36 crc kubenswrapper[4678]: E1124 12:45:36.307637 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="fa52a8b5-88fb-4f22-b067-edbdcee003ea" Nov 24 12:45:36 crc kubenswrapper[4678]: E1124 12:45:36.382483 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" 
pod="openstack/tempest-tests-tempest" podUID="fa52a8b5-88fb-4f22-b067-edbdcee003ea" Nov 24 12:45:39 crc kubenswrapper[4678]: I1124 12:45:39.329613 4678 scope.go:117] "RemoveContainer" containerID="675db4a0010d446397f24882a1721cc247eaefe8449622d354dab269c7bf16b0" Nov 24 12:45:39 crc kubenswrapper[4678]: I1124 12:45:39.370629 4678 scope.go:117] "RemoveContainer" containerID="fb373c2764568a518f10e0fc0d365dd4c43c519ad29560db2f87140c013f43a7" Nov 24 12:45:39 crc kubenswrapper[4678]: I1124 12:45:39.413688 4678 scope.go:117] "RemoveContainer" containerID="230b16cf9cbe920c30c0c6cfdf779075941a73894ac08aaac48f062dcde4b05c" Nov 24 12:45:39 crc kubenswrapper[4678]: I1124 12:45:39.476483 4678 scope.go:117] "RemoveContainer" containerID="898e512ce07f91afbf276a656fb9929741073282892a94c4c0cbfb120c507daf" Nov 24 12:45:52 crc kubenswrapper[4678]: I1124 12:45:52.498412 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 12:45:54 crc kubenswrapper[4678]: I1124 12:45:54.599878 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa52a8b5-88fb-4f22-b067-edbdcee003ea","Type":"ContainerStarted","Data":"b328a9428be729c8687d35538da213c2ddeaaeb0521256ea48ae3a6152056db3"} Nov 24 12:45:54 crc kubenswrapper[4678]: I1124 12:45:54.622948 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.553556502 podStartE2EDuration="56.622930302s" podCreationTimestamp="2025-11-24 12:44:58 +0000 UTC" firstStartedPulling="2025-11-24 12:45:00.425855349 +0000 UTC m=+5311.356914988" lastFinishedPulling="2025-11-24 12:45:52.495229139 +0000 UTC m=+5363.426288788" observedRunningTime="2025-11-24 12:45:54.621185284 +0000 UTC m=+5365.552244913" watchObservedRunningTime="2025-11-24 12:45:54.622930302 +0000 UTC m=+5365.553989931" Nov 24 12:46:00 crc kubenswrapper[4678]: I1124 12:46:00.296433 4678 patch_prober.go:28] 
interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:46:00 crc kubenswrapper[4678]: I1124 12:46:00.297151 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:46:30 crc kubenswrapper[4678]: I1124 12:46:30.296737 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:46:30 crc kubenswrapper[4678]: I1124 12:46:30.297329 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:47:00 crc kubenswrapper[4678]: I1124 12:47:00.300213 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:47:00 crc kubenswrapper[4678]: I1124 12:47:00.301701 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:47:00 crc kubenswrapper[4678]: I1124 12:47:00.302073 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:47:00 crc kubenswrapper[4678]: I1124 12:47:00.304308 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75a7c126087d7d1ddf3a04fd019fd0506ed9d0cf3acde60906561fca1eb78321"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:47:00 crc kubenswrapper[4678]: I1124 12:47:00.306108 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://75a7c126087d7d1ddf3a04fd019fd0506ed9d0cf3acde60906561fca1eb78321" gracePeriod=600 Nov 24 12:47:01 crc kubenswrapper[4678]: I1124 12:47:01.417439 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"75a7c126087d7d1ddf3a04fd019fd0506ed9d0cf3acde60906561fca1eb78321"} Nov 24 12:47:01 crc kubenswrapper[4678]: I1124 12:47:01.420234 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="75a7c126087d7d1ddf3a04fd019fd0506ed9d0cf3acde60906561fca1eb78321" exitCode=0 Nov 24 12:47:01 crc kubenswrapper[4678]: I1124 12:47:01.420332 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87"} Nov 24 12:47:01 crc kubenswrapper[4678]: I1124 12:47:01.423246 4678 scope.go:117] "RemoveContainer" containerID="81be5825c3f19c12e6ec91bf85968ca71c7ecf71c5f592c591e0ca41649dbfaa" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.415293 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9n8n"] Nov 24 12:47:55 crc kubenswrapper[4678]: E1124 12:47:55.429292 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="extract-utilities" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.429340 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="extract-utilities" Nov 24 12:47:55 crc kubenswrapper[4678]: E1124 12:47:55.429815 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8842530-6e2e-4c6c-8d7f-c32867f3faa2" containerName="collect-profiles" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.429829 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8842530-6e2e-4c6c-8d7f-c32867f3faa2" containerName="collect-profiles" Nov 24 12:47:55 crc kubenswrapper[4678]: E1124 12:47:55.429838 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="registry-server" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.429849 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="registry-server" Nov 24 12:47:55 crc kubenswrapper[4678]: E1124 12:47:55.429875 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="extract-content" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.429884 4678 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="extract-content" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.432645 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e69f8c-3e27-40e9-9745-58c570b67749" containerName="registry-server" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.432710 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8842530-6e2e-4c6c-8d7f-c32867f3faa2" containerName="collect-profiles" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.444931 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.501423 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-catalog-content\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.502115 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmq8s\" (UniqueName: \"kubernetes.io/projected/72bf494a-33f9-4467-ab33-dc79151f2216-kube-api-access-dmq8s\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.502385 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-utilities\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.604604 4678 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-catalog-content\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.604833 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmq8s\" (UniqueName: \"kubernetes.io/projected/72bf494a-33f9-4467-ab33-dc79151f2216-kube-api-access-dmq8s\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.604910 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-utilities\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.630448 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-catalog-content\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.632058 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-utilities\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.661554 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dmq8s\" (UniqueName: \"kubernetes.io/projected/72bf494a-33f9-4467-ab33-dc79151f2216-kube-api-access-dmq8s\") pod \"redhat-operators-f9n8n\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.683242 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9n8n"] Nov 24 12:47:55 crc kubenswrapper[4678]: I1124 12:47:55.911730 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:47:58 crc kubenswrapper[4678]: I1124 12:47:58.779535 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9n8n"] Nov 24 12:47:58 crc kubenswrapper[4678]: W1124 12:47:58.939399 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72bf494a_33f9_4467_ab33_dc79151f2216.slice/crio-96e7a00a2aa0b702494a538b5cd3a1c23d49382b322d25932787208b3599360a WatchSource:0}: Error finding container 96e7a00a2aa0b702494a538b5cd3a1c23d49382b322d25932787208b3599360a: Status 404 returned error can't find the container with id 96e7a00a2aa0b702494a538b5cd3a1c23d49382b322d25932787208b3599360a Nov 24 12:47:59 crc kubenswrapper[4678]: I1124 12:47:59.107234 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9n8n" event={"ID":"72bf494a-33f9-4467-ab33-dc79151f2216","Type":"ContainerStarted","Data":"96e7a00a2aa0b702494a538b5cd3a1c23d49382b322d25932787208b3599360a"} Nov 24 12:48:00 crc kubenswrapper[4678]: I1124 12:48:00.119870 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9n8n" event={"ID":"72bf494a-33f9-4467-ab33-dc79151f2216","Type":"ContainerDied","Data":"aaef3871ca662047305f466f01147f9c7296715a9898aad8d10e1bcbfc6a9174"} Nov 24 12:48:00 crc 
kubenswrapper[4678]: I1124 12:48:00.123785 4678 generic.go:334] "Generic (PLEG): container finished" podID="72bf494a-33f9-4467-ab33-dc79151f2216" containerID="aaef3871ca662047305f466f01147f9c7296715a9898aad8d10e1bcbfc6a9174" exitCode=0 Nov 24 12:48:03 crc kubenswrapper[4678]: I1124 12:48:03.157290 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9n8n" event={"ID":"72bf494a-33f9-4467-ab33-dc79151f2216","Type":"ContainerStarted","Data":"471b41ce6a154a0ea15be43912e9b3555e5490f5af216eac72cf04d149755ba7"} Nov 24 12:48:08 crc kubenswrapper[4678]: I1124 12:48:08.213449 4678 generic.go:334] "Generic (PLEG): container finished" podID="72bf494a-33f9-4467-ab33-dc79151f2216" containerID="471b41ce6a154a0ea15be43912e9b3555e5490f5af216eac72cf04d149755ba7" exitCode=0 Nov 24 12:48:08 crc kubenswrapper[4678]: I1124 12:48:08.213526 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9n8n" event={"ID":"72bf494a-33f9-4467-ab33-dc79151f2216","Type":"ContainerDied","Data":"471b41ce6a154a0ea15be43912e9b3555e5490f5af216eac72cf04d149755ba7"} Nov 24 12:48:09 crc kubenswrapper[4678]: I1124 12:48:09.231947 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9n8n" event={"ID":"72bf494a-33f9-4467-ab33-dc79151f2216","Type":"ContainerStarted","Data":"e70c1438262915e1db34a3811d0cc65c42d39414eeaee5b4f899279b9a22391d"} Nov 24 12:48:09 crc kubenswrapper[4678]: I1124 12:48:09.274090 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9n8n" podStartSLOduration=5.74223585 podStartE2EDuration="14.271171265s" podCreationTimestamp="2025-11-24 12:47:55 +0000 UTC" firstStartedPulling="2025-11-24 12:48:00.122169956 +0000 UTC m=+5491.053229595" lastFinishedPulling="2025-11-24 12:48:08.651105371 +0000 UTC m=+5499.582165010" observedRunningTime="2025-11-24 12:48:09.260625752 +0000 UTC m=+5500.191685411" 
watchObservedRunningTime="2025-11-24 12:48:09.271171265 +0000 UTC m=+5500.202230904" Nov 24 12:48:15 crc kubenswrapper[4678]: I1124 12:48:15.913066 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:48:15 crc kubenswrapper[4678]: I1124 12:48:15.913631 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:48:16 crc kubenswrapper[4678]: I1124 12:48:16.964870 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9n8n" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" probeResult="failure" output=< Nov 24 12:48:16 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:48:16 crc kubenswrapper[4678]: > Nov 24 12:48:26 crc kubenswrapper[4678]: I1124 12:48:26.972816 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9n8n" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" probeResult="failure" output=< Nov 24 12:48:26 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:48:26 crc kubenswrapper[4678]: > Nov 24 12:48:36 crc kubenswrapper[4678]: I1124 12:48:36.965658 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9n8n" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" probeResult="failure" output=< Nov 24 12:48:36 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:48:36 crc kubenswrapper[4678]: > Nov 24 12:48:46 crc kubenswrapper[4678]: I1124 12:48:46.975011 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9n8n" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" probeResult="failure" output=< Nov 24 12:48:46 
crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:48:46 crc kubenswrapper[4678]: > Nov 24 12:48:56 crc kubenswrapper[4678]: I1124 12:48:56.981597 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9n8n" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" probeResult="failure" output=< Nov 24 12:48:56 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:48:56 crc kubenswrapper[4678]: > Nov 24 12:49:00 crc kubenswrapper[4678]: I1124 12:49:00.300218 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:49:00 crc kubenswrapper[4678]: I1124 12:49:00.308316 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:49:06 crc kubenswrapper[4678]: I1124 12:49:06.968134 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9n8n" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" probeResult="failure" output=< Nov 24 12:49:06 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:49:06 crc kubenswrapper[4678]: > Nov 24 12:49:15 crc kubenswrapper[4678]: I1124 12:49:15.998754 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:49:16 crc kubenswrapper[4678]: I1124 12:49:16.081200 4678 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:49:16 crc kubenswrapper[4678]: I1124 12:49:16.332876 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9n8n"] Nov 24 12:49:17 crc kubenswrapper[4678]: I1124 12:49:17.186800 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9n8n" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" containerID="cri-o://e70c1438262915e1db34a3811d0cc65c42d39414eeaee5b4f899279b9a22391d" gracePeriod=2 Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.198415 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9n8n" event={"ID":"72bf494a-33f9-4467-ab33-dc79151f2216","Type":"ContainerDied","Data":"e70c1438262915e1db34a3811d0cc65c42d39414eeaee5b4f899279b9a22391d"} Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.198310 4678 generic.go:334] "Generic (PLEG): container finished" podID="72bf494a-33f9-4467-ab33-dc79151f2216" containerID="e70c1438262915e1db34a3811d0cc65c42d39414eeaee5b4f899279b9a22391d" exitCode=0 Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.748703 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.819509 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-catalog-content\") pod \"72bf494a-33f9-4467-ab33-dc79151f2216\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.819839 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmq8s\" (UniqueName: \"kubernetes.io/projected/72bf494a-33f9-4467-ab33-dc79151f2216-kube-api-access-dmq8s\") pod \"72bf494a-33f9-4467-ab33-dc79151f2216\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.819976 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-utilities\") pod \"72bf494a-33f9-4467-ab33-dc79151f2216\" (UID: \"72bf494a-33f9-4467-ab33-dc79151f2216\") " Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.827021 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-utilities" (OuterVolumeSpecName: "utilities") pod "72bf494a-33f9-4467-ab33-dc79151f2216" (UID: "72bf494a-33f9-4467-ab33-dc79151f2216"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.855322 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72bf494a-33f9-4467-ab33-dc79151f2216-kube-api-access-dmq8s" (OuterVolumeSpecName: "kube-api-access-dmq8s") pod "72bf494a-33f9-4467-ab33-dc79151f2216" (UID: "72bf494a-33f9-4467-ab33-dc79151f2216"). InnerVolumeSpecName "kube-api-access-dmq8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.923572 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmq8s\" (UniqueName: \"kubernetes.io/projected/72bf494a-33f9-4467-ab33-dc79151f2216-kube-api-access-dmq8s\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:18 crc kubenswrapper[4678]: I1124 12:49:18.923612 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.096962 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72bf494a-33f9-4467-ab33-dc79151f2216" (UID: "72bf494a-33f9-4467-ab33-dc79151f2216"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.135482 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72bf494a-33f9-4467-ab33-dc79151f2216-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.230571 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9n8n" event={"ID":"72bf494a-33f9-4467-ab33-dc79151f2216","Type":"ContainerDied","Data":"96e7a00a2aa0b702494a538b5cd3a1c23d49382b322d25932787208b3599360a"} Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.230695 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9n8n" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.232879 4678 scope.go:117] "RemoveContainer" containerID="e70c1438262915e1db34a3811d0cc65c42d39414eeaee5b4f899279b9a22391d" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.235237 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6f2qg"] Nov 24 12:49:19 crc kubenswrapper[4678]: E1124 12:49:19.239592 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.239644 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" Nov 24 12:49:19 crc kubenswrapper[4678]: E1124 12:49:19.239708 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="extract-content" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.239716 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="extract-content" Nov 24 12:49:19 crc kubenswrapper[4678]: E1124 12:49:19.239760 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="extract-utilities" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.239769 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="extract-utilities" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.241314 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" containerName="registry-server" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.263502 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.287163 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9n8n"] Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.322208 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9n8n"] Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.347487 4678 scope.go:117] "RemoveContainer" containerID="471b41ce6a154a0ea15be43912e9b3555e5490f5af216eac72cf04d149755ba7" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.347929 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmhq7\" (UniqueName: \"kubernetes.io/projected/c89c756d-b550-41f2-bfb5-beffdae2bd2a-kube-api-access-kmhq7\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.348238 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-catalog-content\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.349121 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-utilities\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.357799 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-6f2qg"] Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.416331 4678 scope.go:117] "RemoveContainer" containerID="aaef3871ca662047305f466f01147f9c7296715a9898aad8d10e1bcbfc6a9174" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.451016 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-utilities\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.451244 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmhq7\" (UniqueName: \"kubernetes.io/projected/c89c756d-b550-41f2-bfb5-beffdae2bd2a-kube-api-access-kmhq7\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.451327 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-catalog-content\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.451625 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-utilities\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.451791 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-catalog-content\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.478750 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmhq7\" (UniqueName: \"kubernetes.io/projected/c89c756d-b550-41f2-bfb5-beffdae2bd2a-kube-api-access-kmhq7\") pod \"certified-operators-6f2qg\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.612964 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:19 crc kubenswrapper[4678]: I1124 12:49:19.914048 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72bf494a-33f9-4467-ab33-dc79151f2216" path="/var/lib/kubelet/pods/72bf494a-33f9-4467-ab33-dc79151f2216/volumes" Nov 24 12:49:20 crc kubenswrapper[4678]: I1124 12:49:20.315323 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6f2qg"] Nov 24 12:49:21 crc kubenswrapper[4678]: I1124 12:49:21.267126 4678 generic.go:334] "Generic (PLEG): container finished" podID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerID="b4e57ee668deea6bfe68aaf5e02caea3660b8ca2602baa289dcc2883311470f3" exitCode=0 Nov 24 12:49:21 crc kubenswrapper[4678]: I1124 12:49:21.267181 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f2qg" event={"ID":"c89c756d-b550-41f2-bfb5-beffdae2bd2a","Type":"ContainerDied","Data":"b4e57ee668deea6bfe68aaf5e02caea3660b8ca2602baa289dcc2883311470f3"} Nov 24 12:49:21 crc kubenswrapper[4678]: I1124 12:49:21.267215 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f2qg" 
event={"ID":"c89c756d-b550-41f2-bfb5-beffdae2bd2a","Type":"ContainerStarted","Data":"c5264d5a9703a72b5ccb8396634cbdca015ca29128cd4eaccc41b991e7c2dcb9"} Nov 24 12:49:23 crc kubenswrapper[4678]: I1124 12:49:23.297618 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f2qg" event={"ID":"c89c756d-b550-41f2-bfb5-beffdae2bd2a","Type":"ContainerStarted","Data":"fd860f8d1288eb0ccda373aa4092bfbf7c547f7429a7f6bdd143bb348a01b86b"} Nov 24 12:49:25 crc kubenswrapper[4678]: I1124 12:49:25.327146 4678 generic.go:334] "Generic (PLEG): container finished" podID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerID="fd860f8d1288eb0ccda373aa4092bfbf7c547f7429a7f6bdd143bb348a01b86b" exitCode=0 Nov 24 12:49:25 crc kubenswrapper[4678]: I1124 12:49:25.327264 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f2qg" event={"ID":"c89c756d-b550-41f2-bfb5-beffdae2bd2a","Type":"ContainerDied","Data":"fd860f8d1288eb0ccda373aa4092bfbf7c547f7429a7f6bdd143bb348a01b86b"} Nov 24 12:49:27 crc kubenswrapper[4678]: I1124 12:49:27.366947 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f2qg" event={"ID":"c89c756d-b550-41f2-bfb5-beffdae2bd2a","Type":"ContainerStarted","Data":"338551eaae022a0db6554f72625af820872b87608d7a6ccd493ef22e08e817fb"} Nov 24 12:49:27 crc kubenswrapper[4678]: I1124 12:49:27.393075 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6f2qg" podStartSLOduration=3.571220382 podStartE2EDuration="8.390768626s" podCreationTimestamp="2025-11-24 12:49:19 +0000 UTC" firstStartedPulling="2025-11-24 12:49:21.271890641 +0000 UTC m=+5572.202950280" lastFinishedPulling="2025-11-24 12:49:26.091438885 +0000 UTC m=+5577.022498524" observedRunningTime="2025-11-24 12:49:27.385084425 +0000 UTC m=+5578.316144064" watchObservedRunningTime="2025-11-24 12:49:27.390768626 +0000 UTC 
m=+5578.321828265" Nov 24 12:49:29 crc kubenswrapper[4678]: I1124 12:49:29.613345 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:29 crc kubenswrapper[4678]: I1124 12:49:29.613686 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:30 crc kubenswrapper[4678]: I1124 12:49:30.305684 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:49:30 crc kubenswrapper[4678]: I1124 12:49:30.332561 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:49:30 crc kubenswrapper[4678]: I1124 12:49:30.668856 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6f2qg" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="registry-server" probeResult="failure" output=< Nov 24 12:49:30 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:49:30 crc kubenswrapper[4678]: > Nov 24 12:49:40 crc kubenswrapper[4678]: I1124 12:49:40.736830 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6f2qg" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="registry-server" probeResult="failure" output=< Nov 24 12:49:40 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:49:40 crc kubenswrapper[4678]: > Nov 24 12:49:49 
crc kubenswrapper[4678]: I1124 12:49:49.673008 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:49 crc kubenswrapper[4678]: I1124 12:49:49.728601 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:50 crc kubenswrapper[4678]: I1124 12:49:50.377883 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6f2qg"] Nov 24 12:49:51 crc kubenswrapper[4678]: I1124 12:49:51.632634 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6f2qg" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="registry-server" containerID="cri-o://338551eaae022a0db6554f72625af820872b87608d7a6ccd493ef22e08e817fb" gracePeriod=2 Nov 24 12:49:52 crc kubenswrapper[4678]: I1124 12:49:52.653289 4678 generic.go:334] "Generic (PLEG): container finished" podID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerID="338551eaae022a0db6554f72625af820872b87608d7a6ccd493ef22e08e817fb" exitCode=0 Nov 24 12:49:52 crc kubenswrapper[4678]: I1124 12:49:52.653726 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f2qg" event={"ID":"c89c756d-b550-41f2-bfb5-beffdae2bd2a","Type":"ContainerDied","Data":"338551eaae022a0db6554f72625af820872b87608d7a6ccd493ef22e08e817fb"} Nov 24 12:49:52 crc kubenswrapper[4678]: I1124 12:49:52.968968 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.033781 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-catalog-content\") pod \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.034020 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmhq7\" (UniqueName: \"kubernetes.io/projected/c89c756d-b550-41f2-bfb5-beffdae2bd2a-kube-api-access-kmhq7\") pod \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.034559 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-utilities\") pod \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\" (UID: \"c89c756d-b550-41f2-bfb5-beffdae2bd2a\") " Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.038987 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-utilities" (OuterVolumeSpecName: "utilities") pod "c89c756d-b550-41f2-bfb5-beffdae2bd2a" (UID: "c89c756d-b550-41f2-bfb5-beffdae2bd2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.058089 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89c756d-b550-41f2-bfb5-beffdae2bd2a-kube-api-access-kmhq7" (OuterVolumeSpecName: "kube-api-access-kmhq7") pod "c89c756d-b550-41f2-bfb5-beffdae2bd2a" (UID: "c89c756d-b550-41f2-bfb5-beffdae2bd2a"). InnerVolumeSpecName "kube-api-access-kmhq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.139637 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.139704 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmhq7\" (UniqueName: \"kubernetes.io/projected/c89c756d-b550-41f2-bfb5-beffdae2bd2a-kube-api-access-kmhq7\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.148296 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c89c756d-b550-41f2-bfb5-beffdae2bd2a" (UID: "c89c756d-b550-41f2-bfb5-beffdae2bd2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.241841 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89c756d-b550-41f2-bfb5-beffdae2bd2a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.686572 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f2qg" event={"ID":"c89c756d-b550-41f2-bfb5-beffdae2bd2a","Type":"ContainerDied","Data":"c5264d5a9703a72b5ccb8396634cbdca015ca29128cd4eaccc41b991e7c2dcb9"} Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.686844 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6f2qg" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.688021 4678 scope.go:117] "RemoveContainer" containerID="338551eaae022a0db6554f72625af820872b87608d7a6ccd493ef22e08e817fb" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.727431 4678 scope.go:117] "RemoveContainer" containerID="fd860f8d1288eb0ccda373aa4092bfbf7c547f7429a7f6bdd143bb348a01b86b" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.760858 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6f2qg"] Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.788725 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6f2qg"] Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.802840 4678 scope.go:117] "RemoveContainer" containerID="b4e57ee668deea6bfe68aaf5e02caea3660b8ca2602baa289dcc2883311470f3" Nov 24 12:49:53 crc kubenswrapper[4678]: I1124 12:49:53.910728 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" path="/var/lib/kubelet/pods/c89c756d-b550-41f2-bfb5-beffdae2bd2a/volumes" Nov 24 12:50:00 crc kubenswrapper[4678]: I1124 12:50:00.296865 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:50:00 crc kubenswrapper[4678]: I1124 12:50:00.297395 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:50:00 crc kubenswrapper[4678]: 
I1124 12:50:00.297453 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:50:00 crc kubenswrapper[4678]: I1124 12:50:00.298425 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:50:00 crc kubenswrapper[4678]: I1124 12:50:00.298499 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" gracePeriod=600 Nov 24 12:50:00 crc kubenswrapper[4678]: E1124 12:50:00.424144 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:50:00 crc kubenswrapper[4678]: I1124 12:50:00.776646 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" exitCode=0 Nov 24 12:50:00 crc kubenswrapper[4678]: I1124 12:50:00.776701 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87"} Nov 24 12:50:00 crc kubenswrapper[4678]: I1124 12:50:00.776815 4678 scope.go:117] "RemoveContainer" containerID="75a7c126087d7d1ddf3a04fd019fd0506ed9d0cf3acde60906561fca1eb78321" Nov 24 12:50:00 crc kubenswrapper[4678]: I1124 12:50:00.777577 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:50:00 crc kubenswrapper[4678]: E1124 12:50:00.777905 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:50:11 crc kubenswrapper[4678]: I1124 12:50:11.897378 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:50:11 crc kubenswrapper[4678]: E1124 12:50:11.898257 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:50:23 crc kubenswrapper[4678]: I1124 12:50:23.896143 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:50:23 crc kubenswrapper[4678]: E1124 12:50:23.897027 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:50:35 crc kubenswrapper[4678]: I1124 12:50:35.902286 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:50:35 crc kubenswrapper[4678]: E1124 12:50:35.903553 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:50:49 crc kubenswrapper[4678]: I1124 12:50:49.905902 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:50:49 crc kubenswrapper[4678]: E1124 12:50:49.906712 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:51:03 crc kubenswrapper[4678]: I1124 12:51:03.895984 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:51:03 crc kubenswrapper[4678]: E1124 12:51:03.896940 4678 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:51:18 crc kubenswrapper[4678]: I1124 12:51:18.899582 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:51:18 crc kubenswrapper[4678]: E1124 12:51:18.901060 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:51:33 crc kubenswrapper[4678]: I1124 12:51:33.896433 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:51:33 crc kubenswrapper[4678]: E1124 12:51:33.897354 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:51:45 crc kubenswrapper[4678]: I1124 12:51:45.896077 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:51:45 crc kubenswrapper[4678]: E1124 12:51:45.897004 4678 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:51:58 crc kubenswrapper[4678]: I1124 12:51:58.896510 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:51:58 crc kubenswrapper[4678]: E1124 12:51:58.897908 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:52:12 crc kubenswrapper[4678]: I1124 12:52:12.895732 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:52:12 crc kubenswrapper[4678]: E1124 12:52:12.896478 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:52:25 crc kubenswrapper[4678]: I1124 12:52:25.895858 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:52:25 crc kubenswrapper[4678]: E1124 12:52:25.896882 4678 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:52:40 crc kubenswrapper[4678]: I1124 12:52:40.895207 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:52:40 crc kubenswrapper[4678]: E1124 12:52:40.896023 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:52:52 crc kubenswrapper[4678]: I1124 12:52:52.896647 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:52:52 crc kubenswrapper[4678]: E1124 12:52:52.898211 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:53:04 crc kubenswrapper[4678]: I1124 12:53:04.896428 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:53:04 crc kubenswrapper[4678]: E1124 
12:53:04.897194 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:53:18 crc kubenswrapper[4678]: I1124 12:53:18.897103 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:53:18 crc kubenswrapper[4678]: E1124 12:53:18.898415 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:53:33 crc kubenswrapper[4678]: I1124 12:53:33.896755 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:53:33 crc kubenswrapper[4678]: E1124 12:53:33.897799 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:53:46 crc kubenswrapper[4678]: I1124 12:53:46.895711 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:53:46 crc 
kubenswrapper[4678]: E1124 12:53:46.896587 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:53:59 crc kubenswrapper[4678]: I1124 12:53:59.904063 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:53:59 crc kubenswrapper[4678]: E1124 12:53:59.904954 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:54:13 crc kubenswrapper[4678]: I1124 12:54:13.896440 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:54:13 crc kubenswrapper[4678]: E1124 12:54:13.897339 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:54:24 crc kubenswrapper[4678]: I1124 12:54:24.896862 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 
24 12:54:24 crc kubenswrapper[4678]: E1124 12:54:24.898330 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:54:36 crc kubenswrapper[4678]: I1124 12:54:36.896784 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:54:36 crc kubenswrapper[4678]: E1124 12:54:36.897634 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.142992 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgszg"] Nov 24 12:54:43 crc kubenswrapper[4678]: E1124 12:54:43.147090 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="extract-utilities" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.147120 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="extract-utilities" Nov 24 12:54:43 crc kubenswrapper[4678]: E1124 12:54:43.147154 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="registry-server" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.147165 
4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="registry-server" Nov 24 12:54:43 crc kubenswrapper[4678]: E1124 12:54:43.147195 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="extract-content" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.147205 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="extract-content" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.149742 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89c756d-b550-41f2-bfb5-beffdae2bd2a" containerName="registry-server" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.156187 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.240006 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgszg"] Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.291045 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-catalog-content\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.291321 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg8wp\" (UniqueName: \"kubernetes.io/projected/4159a453-391a-459b-8d73-eb43ed4cbff3-kube-api-access-rg8wp\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.291516 
4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-utilities\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.394527 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-utilities\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.394807 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-catalog-content\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.394850 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg8wp\" (UniqueName: \"kubernetes.io/projected/4159a453-391a-459b-8d73-eb43ed4cbff3-kube-api-access-rg8wp\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.397438 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-utilities\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.399357 4678 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-catalog-content\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.429315 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg8wp\" (UniqueName: \"kubernetes.io/projected/4159a453-391a-459b-8d73-eb43ed4cbff3-kube-api-access-rg8wp\") pod \"redhat-marketplace-fgszg\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:43 crc kubenswrapper[4678]: I1124 12:54:43.492853 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:44 crc kubenswrapper[4678]: I1124 12:54:44.128502 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgszg"] Nov 24 12:54:44 crc kubenswrapper[4678]: I1124 12:54:44.944770 4678 generic.go:334] "Generic (PLEG): container finished" podID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerID="386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c" exitCode=0 Nov 24 12:54:44 crc kubenswrapper[4678]: I1124 12:54:44.944899 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgszg" event={"ID":"4159a453-391a-459b-8d73-eb43ed4cbff3","Type":"ContainerDied","Data":"386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c"} Nov 24 12:54:44 crc kubenswrapper[4678]: I1124 12:54:44.946653 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgszg" event={"ID":"4159a453-391a-459b-8d73-eb43ed4cbff3","Type":"ContainerStarted","Data":"0ad82a80ad1868eb5475c23decb81a909d2c3bb1df57e6ca18a680f79e4bd328"} Nov 24 12:54:44 crc kubenswrapper[4678]: I1124 
12:54:44.954690 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.301340 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hl2wg"] Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.304638 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.344272 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hl2wg"] Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.447522 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-utilities\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.447594 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pkns\" (UniqueName: \"kubernetes.io/projected/5edd32c6-e198-430f-a3c6-03e9cf79a912-kube-api-access-8pkns\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.447734 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-catalog-content\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.549718 4678 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-utilities\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.549778 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pkns\" (UniqueName: \"kubernetes.io/projected/5edd32c6-e198-430f-a3c6-03e9cf79a912-kube-api-access-8pkns\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.549854 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-catalog-content\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.551581 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-utilities\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.551632 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-catalog-content\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.569692 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8pkns\" (UniqueName: \"kubernetes.io/projected/5edd32c6-e198-430f-a3c6-03e9cf79a912-kube-api-access-8pkns\") pod \"community-operators-hl2wg\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:45 crc kubenswrapper[4678]: I1124 12:54:45.631970 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:46 crc kubenswrapper[4678]: I1124 12:54:46.236325 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hl2wg"] Nov 24 12:54:46 crc kubenswrapper[4678]: I1124 12:54:46.976779 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgszg" event={"ID":"4159a453-391a-459b-8d73-eb43ed4cbff3","Type":"ContainerStarted","Data":"09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7"} Nov 24 12:54:46 crc kubenswrapper[4678]: I1124 12:54:46.980113 4678 generic.go:334] "Generic (PLEG): container finished" podID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerID="6924e392814aafe24478a486fbc818dea7fafadfd49beadb12fc3caf23dc9033" exitCode=0 Nov 24 12:54:46 crc kubenswrapper[4678]: I1124 12:54:46.980165 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl2wg" event={"ID":"5edd32c6-e198-430f-a3c6-03e9cf79a912","Type":"ContainerDied","Data":"6924e392814aafe24478a486fbc818dea7fafadfd49beadb12fc3caf23dc9033"} Nov 24 12:54:46 crc kubenswrapper[4678]: I1124 12:54:46.980193 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl2wg" event={"ID":"5edd32c6-e198-430f-a3c6-03e9cf79a912","Type":"ContainerStarted","Data":"34898f3eaf8b73f8b35b3ca12e44fad526d5e061efc83258311df637a568291b"} Nov 24 12:54:47 crc kubenswrapper[4678]: I1124 12:54:47.992868 4678 generic.go:334] "Generic (PLEG): container finished" 
podID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerID="09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7" exitCode=0 Nov 24 12:54:47 crc kubenswrapper[4678]: I1124 12:54:47.993139 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgszg" event={"ID":"4159a453-391a-459b-8d73-eb43ed4cbff3","Type":"ContainerDied","Data":"09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7"} Nov 24 12:54:49 crc kubenswrapper[4678]: I1124 12:54:49.013798 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgszg" event={"ID":"4159a453-391a-459b-8d73-eb43ed4cbff3","Type":"ContainerStarted","Data":"e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f"} Nov 24 12:54:49 crc kubenswrapper[4678]: I1124 12:54:49.016267 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl2wg" event={"ID":"5edd32c6-e198-430f-a3c6-03e9cf79a912","Type":"ContainerStarted","Data":"fa823991aaf730ebebfd7dd10f89782c9af50faf651985b8c5d9bc707d175a68"} Nov 24 12:54:49 crc kubenswrapper[4678]: I1124 12:54:49.044104 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgszg" podStartSLOduration=2.431254442 podStartE2EDuration="6.040128498s" podCreationTimestamp="2025-11-24 12:54:43 +0000 UTC" firstStartedPulling="2025-11-24 12:54:44.947596185 +0000 UTC m=+5895.878655824" lastFinishedPulling="2025-11-24 12:54:48.556470241 +0000 UTC m=+5899.487529880" observedRunningTime="2025-11-24 12:54:49.029357901 +0000 UTC m=+5899.960417550" watchObservedRunningTime="2025-11-24 12:54:49.040128498 +0000 UTC m=+5899.971188147" Nov 24 12:54:49 crc kubenswrapper[4678]: I1124 12:54:49.905213 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:54:49 crc kubenswrapper[4678]: E1124 12:54:49.905847 4678 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 12:54:53 crc kubenswrapper[4678]: I1124 12:54:53.065282 4678 generic.go:334] "Generic (PLEG): container finished" podID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerID="fa823991aaf730ebebfd7dd10f89782c9af50faf651985b8c5d9bc707d175a68" exitCode=0 Nov 24 12:54:53 crc kubenswrapper[4678]: I1124 12:54:53.065342 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl2wg" event={"ID":"5edd32c6-e198-430f-a3c6-03e9cf79a912","Type":"ContainerDied","Data":"fa823991aaf730ebebfd7dd10f89782c9af50faf651985b8c5d9bc707d175a68"} Nov 24 12:54:53 crc kubenswrapper[4678]: I1124 12:54:53.493224 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:53 crc kubenswrapper[4678]: I1124 12:54:53.493565 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:54 crc kubenswrapper[4678]: I1124 12:54:54.122710 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:54 crc kubenswrapper[4678]: I1124 12:54:54.181153 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:54 crc kubenswrapper[4678]: I1124 12:54:54.875299 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgszg"] Nov 24 12:54:55 crc kubenswrapper[4678]: I1124 12:54:55.091887 4678 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl2wg" event={"ID":"5edd32c6-e198-430f-a3c6-03e9cf79a912","Type":"ContainerStarted","Data":"d3b4f1533ce239a1b0fffca323125756ffb850992fc271e9b018b2638e891ea2"} Nov 24 12:54:55 crc kubenswrapper[4678]: I1124 12:54:55.113737 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hl2wg" podStartSLOduration=3.637339379 podStartE2EDuration="10.113721219s" podCreationTimestamp="2025-11-24 12:54:45 +0000 UTC" firstStartedPulling="2025-11-24 12:54:46.982006451 +0000 UTC m=+5897.913066090" lastFinishedPulling="2025-11-24 12:54:53.458388281 +0000 UTC m=+5904.389447930" observedRunningTime="2025-11-24 12:54:55.106490078 +0000 UTC m=+5906.037549717" watchObservedRunningTime="2025-11-24 12:54:55.113721219 +0000 UTC m=+5906.044780858" Nov 24 12:54:55 crc kubenswrapper[4678]: I1124 12:54:55.632738 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:55 crc kubenswrapper[4678]: I1124 12:54:55.633131 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.100539 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgszg" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerName="registry-server" containerID="cri-o://e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f" gracePeriod=2 Nov 24 12:54:56 crc kubenswrapper[4678]: E1124 12:54:56.237514 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4159a453_391a_459b_8d73_eb43ed4cbff3.slice/crio-conmon-e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.682848 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hl2wg" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="registry-server" probeResult="failure" output=< Nov 24 12:54:56 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 12:54:56 crc kubenswrapper[4678]: > Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.793591 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.835172 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg8wp\" (UniqueName: \"kubernetes.io/projected/4159a453-391a-459b-8d73-eb43ed4cbff3-kube-api-access-rg8wp\") pod \"4159a453-391a-459b-8d73-eb43ed4cbff3\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.835412 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-utilities\") pod \"4159a453-391a-459b-8d73-eb43ed4cbff3\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.835819 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-catalog-content\") pod \"4159a453-391a-459b-8d73-eb43ed4cbff3\" (UID: \"4159a453-391a-459b-8d73-eb43ed4cbff3\") " Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.836650 4678 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-utilities" (OuterVolumeSpecName: "utilities") pod "4159a453-391a-459b-8d73-eb43ed4cbff3" (UID: "4159a453-391a-459b-8d73-eb43ed4cbff3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.851402 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4159a453-391a-459b-8d73-eb43ed4cbff3" (UID: "4159a453-391a-459b-8d73-eb43ed4cbff3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.854792 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4159a453-391a-459b-8d73-eb43ed4cbff3-kube-api-access-rg8wp" (OuterVolumeSpecName: "kube-api-access-rg8wp") pod "4159a453-391a-459b-8d73-eb43ed4cbff3" (UID: "4159a453-391a-459b-8d73-eb43ed4cbff3"). InnerVolumeSpecName "kube-api-access-rg8wp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.939031 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.939359 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg8wp\" (UniqueName: \"kubernetes.io/projected/4159a453-391a-459b-8d73-eb43ed4cbff3-kube-api-access-rg8wp\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:56 crc kubenswrapper[4678]: I1124 12:54:56.939374 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4159a453-391a-459b-8d73-eb43ed4cbff3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.113911 4678 generic.go:334] "Generic (PLEG): container finished" podID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerID="e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f" exitCode=0 Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.113960 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgszg" event={"ID":"4159a453-391a-459b-8d73-eb43ed4cbff3","Type":"ContainerDied","Data":"e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f"} Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.113999 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgszg" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.114030 4678 scope.go:117] "RemoveContainer" containerID="e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.114013 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgszg" event={"ID":"4159a453-391a-459b-8d73-eb43ed4cbff3","Type":"ContainerDied","Data":"0ad82a80ad1868eb5475c23decb81a909d2c3bb1df57e6ca18a680f79e4bd328"} Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.135824 4678 scope.go:117] "RemoveContainer" containerID="09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.152597 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgszg"] Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.166281 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgszg"] Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.173960 4678 scope.go:117] "RemoveContainer" containerID="386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.214129 4678 scope.go:117] "RemoveContainer" containerID="e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f" Nov 24 12:54:57 crc kubenswrapper[4678]: E1124 12:54:57.218617 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f\": container with ID starting with e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f not found: ID does not exist" containerID="e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.218689 4678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f"} err="failed to get container status \"e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f\": rpc error: code = NotFound desc = could not find container \"e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f\": container with ID starting with e14fe075fe4b4f704f1fd390f25e952b88d52a2dbf19ad5c07d3db71a86e843f not found: ID does not exist" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.218720 4678 scope.go:117] "RemoveContainer" containerID="09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7" Nov 24 12:54:57 crc kubenswrapper[4678]: E1124 12:54:57.219186 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7\": container with ID starting with 09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7 not found: ID does not exist" containerID="09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.219230 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7"} err="failed to get container status \"09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7\": rpc error: code = NotFound desc = could not find container \"09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7\": container with ID starting with 09f321ffeba8c4067afbc44fe6da25b3c0159b087978637931de7cbe88f047a7 not found: ID does not exist" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.219255 4678 scope.go:117] "RemoveContainer" containerID="386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c" Nov 24 12:54:57 crc kubenswrapper[4678]: E1124 
12:54:57.219528 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c\": container with ID starting with 386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c not found: ID does not exist" containerID="386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.219561 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c"} err="failed to get container status \"386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c\": rpc error: code = NotFound desc = could not find container \"386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c\": container with ID starting with 386e7a947f3837d769d91e9dc93eb2e5774c4f806e131780e8edd7af0cfe094c not found: ID does not exist" Nov 24 12:54:57 crc kubenswrapper[4678]: I1124 12:54:57.907959 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" path="/var/lib/kubelet/pods/4159a453-391a-459b-8d73-eb43ed4cbff3/volumes" Nov 24 12:55:01 crc kubenswrapper[4678]: I1124 12:55:01.895454 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:55:03 crc kubenswrapper[4678]: I1124 12:55:03.192306 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"73c581ee7cce6b381caa43bd1131c151f15863e02fb6b2474d937500276d7568"} Nov 24 12:55:05 crc kubenswrapper[4678]: I1124 12:55:05.701608 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:55:05 crc 
kubenswrapper[4678]: I1124 12:55:05.763341 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:55:05 crc kubenswrapper[4678]: I1124 12:55:05.943894 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hl2wg"] Nov 24 12:55:07 crc kubenswrapper[4678]: I1124 12:55:07.239771 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hl2wg" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="registry-server" containerID="cri-o://d3b4f1533ce239a1b0fffca323125756ffb850992fc271e9b018b2638e891ea2" gracePeriod=2 Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.252466 4678 generic.go:334] "Generic (PLEG): container finished" podID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerID="d3b4f1533ce239a1b0fffca323125756ffb850992fc271e9b018b2638e891ea2" exitCode=0 Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.252523 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl2wg" event={"ID":"5edd32c6-e198-430f-a3c6-03e9cf79a912","Type":"ContainerDied","Data":"d3b4f1533ce239a1b0fffca323125756ffb850992fc271e9b018b2638e891ea2"} Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.427889 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.497591 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pkns\" (UniqueName: \"kubernetes.io/projected/5edd32c6-e198-430f-a3c6-03e9cf79a912-kube-api-access-8pkns\") pod \"5edd32c6-e198-430f-a3c6-03e9cf79a912\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.498019 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-catalog-content\") pod \"5edd32c6-e198-430f-a3c6-03e9cf79a912\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.498155 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-utilities\") pod \"5edd32c6-e198-430f-a3c6-03e9cf79a912\" (UID: \"5edd32c6-e198-430f-a3c6-03e9cf79a912\") " Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.498922 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-utilities" (OuterVolumeSpecName: "utilities") pod "5edd32c6-e198-430f-a3c6-03e9cf79a912" (UID: "5edd32c6-e198-430f-a3c6-03e9cf79a912"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.505348 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5edd32c6-e198-430f-a3c6-03e9cf79a912-kube-api-access-8pkns" (OuterVolumeSpecName: "kube-api-access-8pkns") pod "5edd32c6-e198-430f-a3c6-03e9cf79a912" (UID: "5edd32c6-e198-430f-a3c6-03e9cf79a912"). InnerVolumeSpecName "kube-api-access-8pkns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.552087 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5edd32c6-e198-430f-a3c6-03e9cf79a912" (UID: "5edd32c6-e198-430f-a3c6-03e9cf79a912"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.600981 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pkns\" (UniqueName: \"kubernetes.io/projected/5edd32c6-e198-430f-a3c6-03e9cf79a912-kube-api-access-8pkns\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.601012 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:08 crc kubenswrapper[4678]: I1124 12:55:08.601022 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edd32c6-e198-430f-a3c6-03e9cf79a912-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:09 crc kubenswrapper[4678]: I1124 12:55:09.265480 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl2wg" event={"ID":"5edd32c6-e198-430f-a3c6-03e9cf79a912","Type":"ContainerDied","Data":"34898f3eaf8b73f8b35b3ca12e44fad526d5e061efc83258311df637a568291b"} Nov 24 12:55:09 crc kubenswrapper[4678]: I1124 12:55:09.265524 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hl2wg" Nov 24 12:55:09 crc kubenswrapper[4678]: I1124 12:55:09.265557 4678 scope.go:117] "RemoveContainer" containerID="d3b4f1533ce239a1b0fffca323125756ffb850992fc271e9b018b2638e891ea2" Nov 24 12:55:09 crc kubenswrapper[4678]: I1124 12:55:09.293803 4678 scope.go:117] "RemoveContainer" containerID="fa823991aaf730ebebfd7dd10f89782c9af50faf651985b8c5d9bc707d175a68" Nov 24 12:55:09 crc kubenswrapper[4678]: I1124 12:55:09.301703 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hl2wg"] Nov 24 12:55:09 crc kubenswrapper[4678]: I1124 12:55:09.311780 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hl2wg"] Nov 24 12:55:09 crc kubenswrapper[4678]: I1124 12:55:09.341240 4678 scope.go:117] "RemoveContainer" containerID="6924e392814aafe24478a486fbc818dea7fafadfd49beadb12fc3caf23dc9033" Nov 24 12:55:09 crc kubenswrapper[4678]: I1124 12:55:09.908032 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" path="/var/lib/kubelet/pods/5edd32c6-e198-430f-a3c6-03e9cf79a912/volumes" Nov 24 12:57:30 crc kubenswrapper[4678]: I1124 12:57:30.296723 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:57:30 crc kubenswrapper[4678]: I1124 12:57:30.297389 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:57:51 crc kubenswrapper[4678]: 
I1124 12:57:51.700125 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-74f7b98495-b5gj8" podUID="95ada9de-2ac2-4ea9-9d4d-0ef4293da59f" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 12:58:00 crc kubenswrapper[4678]: I1124 12:58:00.296773 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:58:00 crc kubenswrapper[4678]: I1124 12:58:00.297894 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:58:30 crc kubenswrapper[4678]: I1124 12:58:30.297191 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:58:30 crc kubenswrapper[4678]: I1124 12:58:30.298363 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:58:30 crc kubenswrapper[4678]: I1124 12:58:30.298463 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 12:58:30 crc 
kubenswrapper[4678]: I1124 12:58:30.300105 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73c581ee7cce6b381caa43bd1131c151f15863e02fb6b2474d937500276d7568"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:58:30 crc kubenswrapper[4678]: I1124 12:58:30.300262 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://73c581ee7cce6b381caa43bd1131c151f15863e02fb6b2474d937500276d7568" gracePeriod=600 Nov 24 12:58:30 crc kubenswrapper[4678]: I1124 12:58:30.663737 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="73c581ee7cce6b381caa43bd1131c151f15863e02fb6b2474d937500276d7568" exitCode=0 Nov 24 12:58:30 crc kubenswrapper[4678]: I1124 12:58:30.663848 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"73c581ee7cce6b381caa43bd1131c151f15863e02fb6b2474d937500276d7568"} Nov 24 12:58:30 crc kubenswrapper[4678]: I1124 12:58:30.664289 4678 scope.go:117] "RemoveContainer" containerID="371a65822455e461f2f633a4182d8510566ec503b557dee00e92eed9f1569d87" Nov 24 12:58:31 crc kubenswrapper[4678]: I1124 12:58:31.682635 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497"} Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.984463 4678 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r4n2c"] Nov 24 12:59:50 crc kubenswrapper[4678]: E1124 12:59:50.986604 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerName="extract-utilities" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.986686 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerName="extract-utilities" Nov 24 12:59:50 crc kubenswrapper[4678]: E1124 12:59:50.986703 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerName="extract-content" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.986713 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerName="extract-content" Nov 24 12:59:50 crc kubenswrapper[4678]: E1124 12:59:50.986740 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="extract-content" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.986750 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="extract-content" Nov 24 12:59:50 crc kubenswrapper[4678]: E1124 12:59:50.986767 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerName="registry-server" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.986774 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerName="registry-server" Nov 24 12:59:50 crc kubenswrapper[4678]: E1124 12:59:50.986788 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="registry-server" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.986796 4678 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="registry-server" Nov 24 12:59:50 crc kubenswrapper[4678]: E1124 12:59:50.986819 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="extract-utilities" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.986828 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="extract-utilities" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.987118 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4159a453-391a-459b-8d73-eb43ed4cbff3" containerName="registry-server" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.987163 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="5edd32c6-e198-430f-a3c6-03e9cf79a912" containerName="registry-server" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.989708 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:50 crc kubenswrapper[4678]: I1124 12:59:50.997300 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4n2c"] Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.119256 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gphz\" (UniqueName: \"kubernetes.io/projected/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-kube-api-access-6gphz\") pod \"certified-operators-r4n2c\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.119852 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-utilities\") pod \"certified-operators-r4n2c\" (UID: 
\"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.119912 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-catalog-content\") pod \"certified-operators-r4n2c\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.223065 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gphz\" (UniqueName: \"kubernetes.io/projected/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-kube-api-access-6gphz\") pod \"certified-operators-r4n2c\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.223231 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-utilities\") pod \"certified-operators-r4n2c\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.223273 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-catalog-content\") pod \"certified-operators-r4n2c\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.224000 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-catalog-content\") pod \"certified-operators-r4n2c\" (UID: 
\"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.224743 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-utilities\") pod \"certified-operators-r4n2c\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.245964 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gphz\" (UniqueName: \"kubernetes.io/projected/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-kube-api-access-6gphz\") pod \"certified-operators-r4n2c\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:51 crc kubenswrapper[4678]: I1124 12:59:51.318869 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 12:59:52 crc kubenswrapper[4678]: I1124 12:59:52.514314 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4n2c"] Nov 24 12:59:52 crc kubenswrapper[4678]: I1124 12:59:52.567974 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4n2c" event={"ID":"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a","Type":"ContainerStarted","Data":"7eb791315a423d550841144c0608fbc66e203fbb7c7aee5080757b8c20ff99f2"} Nov 24 12:59:53 crc kubenswrapper[4678]: I1124 12:59:53.580736 4678 generic.go:334] "Generic (PLEG): container finished" podID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerID="c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434" exitCode=0 Nov 24 12:59:53 crc kubenswrapper[4678]: I1124 12:59:53.580817 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4n2c" 
event={"ID":"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a","Type":"ContainerDied","Data":"c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434"} Nov 24 12:59:53 crc kubenswrapper[4678]: I1124 12:59:53.583705 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:59:54 crc kubenswrapper[4678]: I1124 12:59:54.598370 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4n2c" event={"ID":"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a","Type":"ContainerStarted","Data":"af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83"} Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.176186 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzmwn"] Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.179961 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.190373 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzmwn"] Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.265413 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c0d2913-b328-4661-8434-5e053b49589f-utilities\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.266017 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgwwd\" (UniqueName: \"kubernetes.io/projected/8c0d2913-b328-4661-8434-5e053b49589f-kube-api-access-dgwwd\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 
crc kubenswrapper[4678]: I1124 12:59:56.266148 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c0d2913-b328-4661-8434-5e053b49589f-catalog-content\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.368255 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgwwd\" (UniqueName: \"kubernetes.io/projected/8c0d2913-b328-4661-8434-5e053b49589f-kube-api-access-dgwwd\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.368310 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c0d2913-b328-4661-8434-5e053b49589f-catalog-content\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.368384 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c0d2913-b328-4661-8434-5e053b49589f-utilities\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.368875 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c0d2913-b328-4661-8434-5e053b49589f-utilities\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 
12:59:56.370364 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c0d2913-b328-4661-8434-5e053b49589f-catalog-content\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.401192 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgwwd\" (UniqueName: \"kubernetes.io/projected/8c0d2913-b328-4661-8434-5e053b49589f-kube-api-access-dgwwd\") pod \"redhat-operators-fzmwn\" (UID: \"8c0d2913-b328-4661-8434-5e053b49589f\") " pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.527911 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.619780 4678 generic.go:334] "Generic (PLEG): container finished" podID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerID="af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83" exitCode=0 Nov 24 12:59:56 crc kubenswrapper[4678]: I1124 12:59:56.619831 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4n2c" event={"ID":"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a","Type":"ContainerDied","Data":"af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83"} Nov 24 12:59:57 crc kubenswrapper[4678]: I1124 12:59:57.040176 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzmwn"] Nov 24 12:59:57 crc kubenswrapper[4678]: I1124 12:59:57.632536 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4n2c" event={"ID":"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a","Type":"ContainerStarted","Data":"e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d"} Nov 24 
12:59:57 crc kubenswrapper[4678]: I1124 12:59:57.634410 4678 generic.go:334] "Generic (PLEG): container finished" podID="8c0d2913-b328-4661-8434-5e053b49589f" containerID="82207aa8bd8016fb3f32a79ed261eb829ff0a7462ad457501f501218ebef7ecc" exitCode=0 Nov 24 12:59:57 crc kubenswrapper[4678]: I1124 12:59:57.634467 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzmwn" event={"ID":"8c0d2913-b328-4661-8434-5e053b49589f","Type":"ContainerDied","Data":"82207aa8bd8016fb3f32a79ed261eb829ff0a7462ad457501f501218ebef7ecc"} Nov 24 12:59:57 crc kubenswrapper[4678]: I1124 12:59:57.634506 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzmwn" event={"ID":"8c0d2913-b328-4661-8434-5e053b49589f","Type":"ContainerStarted","Data":"b65ffca81aa5dc211da516e48d4a6741620715c7b2d043539296cc661fa817d5"} Nov 24 12:59:57 crc kubenswrapper[4678]: I1124 12:59:57.656487 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r4n2c" podStartSLOduration=4.227831077 podStartE2EDuration="7.656467443s" podCreationTimestamp="2025-11-24 12:59:50 +0000 UTC" firstStartedPulling="2025-11-24 12:59:53.583428644 +0000 UTC m=+6204.514488283" lastFinishedPulling="2025-11-24 12:59:57.01206501 +0000 UTC m=+6207.943124649" observedRunningTime="2025-11-24 12:59:57.650031092 +0000 UTC m=+6208.581090731" watchObservedRunningTime="2025-11-24 12:59:57.656467443 +0000 UTC m=+6208.587527082" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.220741 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql"] Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.223917 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.238058 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql"] Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.242418 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.242461 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.316337 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9946313-48a8-43a9-9ff0-56fcb26991a7-secret-volume\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.316523 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkwmh\" (UniqueName: \"kubernetes.io/projected/d9946313-48a8-43a9-9ff0-56fcb26991a7-kube-api-access-tkwmh\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.316615 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9946313-48a8-43a9-9ff0-56fcb26991a7-config-volume\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.418939 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9946313-48a8-43a9-9ff0-56fcb26991a7-secret-volume\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.419054 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkwmh\" (UniqueName: \"kubernetes.io/projected/d9946313-48a8-43a9-9ff0-56fcb26991a7-kube-api-access-tkwmh\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.419138 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9946313-48a8-43a9-9ff0-56fcb26991a7-config-volume\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.420186 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9946313-48a8-43a9-9ff0-56fcb26991a7-config-volume\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.427972 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d9946313-48a8-43a9-9ff0-56fcb26991a7-secret-volume\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.436252 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkwmh\" (UniqueName: \"kubernetes.io/projected/d9946313-48a8-43a9-9ff0-56fcb26991a7-kube-api-access-tkwmh\") pod \"collect-profiles-29399820-vnpql\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:00 crc kubenswrapper[4678]: I1124 13:00:00.576902 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:01 crc kubenswrapper[4678]: I1124 13:00:01.130181 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql"] Nov 24 13:00:01 crc kubenswrapper[4678]: I1124 13:00:01.319304 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 13:00:01 crc kubenswrapper[4678]: I1124 13:00:01.319374 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 13:00:01 crc kubenswrapper[4678]: I1124 13:00:01.702889 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" event={"ID":"d9946313-48a8-43a9-9ff0-56fcb26991a7","Type":"ContainerStarted","Data":"b7ff63808602d6f07a8ef08ed8a9173dc8737dfea2ca5b755790ac4771faa565"} Nov 24 13:00:01 crc kubenswrapper[4678]: I1124 13:00:01.703292 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" 
event={"ID":"d9946313-48a8-43a9-9ff0-56fcb26991a7","Type":"ContainerStarted","Data":"8129054b9cfecfa0f00e1c5cce6f696c5a40dba60e17b2dd612899836a57a612"} Nov 24 13:00:01 crc kubenswrapper[4678]: I1124 13:00:01.735419 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" podStartSLOduration=1.73540151 podStartE2EDuration="1.73540151s" podCreationTimestamp="2025-11-24 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:00:01.727505799 +0000 UTC m=+6212.658565448" watchObservedRunningTime="2025-11-24 13:00:01.73540151 +0000 UTC m=+6212.666461149" Nov 24 13:00:02 crc kubenswrapper[4678]: I1124 13:00:02.379325 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-r4n2c" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="registry-server" probeResult="failure" output=< Nov 24 13:00:02 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:00:02 crc kubenswrapper[4678]: > Nov 24 13:00:02 crc kubenswrapper[4678]: I1124 13:00:02.728362 4678 generic.go:334] "Generic (PLEG): container finished" podID="d9946313-48a8-43a9-9ff0-56fcb26991a7" containerID="b7ff63808602d6f07a8ef08ed8a9173dc8737dfea2ca5b755790ac4771faa565" exitCode=0 Nov 24 13:00:02 crc kubenswrapper[4678]: I1124 13:00:02.728421 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" event={"ID":"d9946313-48a8-43a9-9ff0-56fcb26991a7","Type":"ContainerDied","Data":"b7ff63808602d6f07a8ef08ed8a9173dc8737dfea2ca5b755790ac4771faa565"} Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.141446 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.239421 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9946313-48a8-43a9-9ff0-56fcb26991a7-config-volume\") pod \"d9946313-48a8-43a9-9ff0-56fcb26991a7\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.239590 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkwmh\" (UniqueName: \"kubernetes.io/projected/d9946313-48a8-43a9-9ff0-56fcb26991a7-kube-api-access-tkwmh\") pod \"d9946313-48a8-43a9-9ff0-56fcb26991a7\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.239760 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9946313-48a8-43a9-9ff0-56fcb26991a7-secret-volume\") pod \"d9946313-48a8-43a9-9ff0-56fcb26991a7\" (UID: \"d9946313-48a8-43a9-9ff0-56fcb26991a7\") " Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.240390 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9946313-48a8-43a9-9ff0-56fcb26991a7-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9946313-48a8-43a9-9ff0-56fcb26991a7" (UID: "d9946313-48a8-43a9-9ff0-56fcb26991a7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.240779 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9946313-48a8-43a9-9ff0-56fcb26991a7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.248968 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9946313-48a8-43a9-9ff0-56fcb26991a7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9946313-48a8-43a9-9ff0-56fcb26991a7" (UID: "d9946313-48a8-43a9-9ff0-56fcb26991a7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.254821 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9946313-48a8-43a9-9ff0-56fcb26991a7-kube-api-access-tkwmh" (OuterVolumeSpecName: "kube-api-access-tkwmh") pod "d9946313-48a8-43a9-9ff0-56fcb26991a7" (UID: "d9946313-48a8-43a9-9ff0-56fcb26991a7"). InnerVolumeSpecName "kube-api-access-tkwmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.342830 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkwmh\" (UniqueName: \"kubernetes.io/projected/d9946313-48a8-43a9-9ff0-56fcb26991a7-kube-api-access-tkwmh\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.342874 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9946313-48a8-43a9-9ff0-56fcb26991a7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.756337 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" event={"ID":"d9946313-48a8-43a9-9ff0-56fcb26991a7","Type":"ContainerDied","Data":"8129054b9cfecfa0f00e1c5cce6f696c5a40dba60e17b2dd612899836a57a612"} Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.756381 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8129054b9cfecfa0f00e1c5cce6f696c5a40dba60e17b2dd612899836a57a612" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.756467 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-vnpql" Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.830571 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj"] Nov 24 13:00:04 crc kubenswrapper[4678]: I1124 13:00:04.840709 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-hjclj"] Nov 24 13:00:05 crc kubenswrapper[4678]: I1124 13:00:05.931430 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0a4ab5-38c5-44ee-b039-609c6a3589f4" path="/var/lib/kubelet/pods/2e0a4ab5-38c5-44ee-b039-609c6a3589f4/volumes" Nov 24 13:00:10 crc kubenswrapper[4678]: I1124 13:00:10.835414 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzmwn" event={"ID":"8c0d2913-b328-4661-8434-5e053b49589f","Type":"ContainerStarted","Data":"02091a72bae4b6dc5248e4a478ad82311c7f547981c2e1ea944a3fde9f823ade"} Nov 24 13:00:12 crc kubenswrapper[4678]: I1124 13:00:12.449592 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-r4n2c" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="registry-server" probeResult="failure" output=< Nov 24 13:00:12 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:00:12 crc kubenswrapper[4678]: > Nov 24 13:00:14 crc kubenswrapper[4678]: I1124 13:00:14.909357 4678 generic.go:334] "Generic (PLEG): container finished" podID="8c0d2913-b328-4661-8434-5e053b49589f" containerID="02091a72bae4b6dc5248e4a478ad82311c7f547981c2e1ea944a3fde9f823ade" exitCode=0 Nov 24 13:00:14 crc kubenswrapper[4678]: I1124 13:00:14.909452 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzmwn" 
event={"ID":"8c0d2913-b328-4661-8434-5e053b49589f","Type":"ContainerDied","Data":"02091a72bae4b6dc5248e4a478ad82311c7f547981c2e1ea944a3fde9f823ade"} Nov 24 13:00:15 crc kubenswrapper[4678]: I1124 13:00:15.931879 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzmwn" event={"ID":"8c0d2913-b328-4661-8434-5e053b49589f","Type":"ContainerStarted","Data":"3ec49ecdf9e591c1f9ad0954e64771d9354c91b94a93c773c6c8cd831f7390cf"} Nov 24 13:00:15 crc kubenswrapper[4678]: I1124 13:00:15.952861 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzmwn" podStartSLOduration=2.236368982 podStartE2EDuration="19.952841629s" podCreationTimestamp="2025-11-24 12:59:56 +0000 UTC" firstStartedPulling="2025-11-24 12:59:57.637923867 +0000 UTC m=+6208.568983506" lastFinishedPulling="2025-11-24 13:00:15.354396524 +0000 UTC m=+6226.285456153" observedRunningTime="2025-11-24 13:00:15.946479268 +0000 UTC m=+6226.877538907" watchObservedRunningTime="2025-11-24 13:00:15.952841629 +0000 UTC m=+6226.883901268" Nov 24 13:00:16 crc kubenswrapper[4678]: I1124 13:00:16.528197 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 13:00:16 crc kubenswrapper[4678]: I1124 13:00:16.528261 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 13:00:17 crc kubenswrapper[4678]: I1124 13:00:17.580450 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzmwn" podUID="8c0d2913-b328-4661-8434-5e053b49589f" containerName="registry-server" probeResult="failure" output=< Nov 24 13:00:17 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:00:17 crc kubenswrapper[4678]: > Nov 24 13:00:21 crc kubenswrapper[4678]: I1124 13:00:21.379660 4678 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 13:00:21 crc kubenswrapper[4678]: I1124 13:00:21.440909 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 13:00:22 crc kubenswrapper[4678]: I1124 13:00:22.184717 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4n2c"] Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.002900 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r4n2c" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="registry-server" containerID="cri-o://e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d" gracePeriod=2 Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.702001 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.745739 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gphz\" (UniqueName: \"kubernetes.io/projected/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-kube-api-access-6gphz\") pod \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.746448 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-catalog-content\") pod \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.747126 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-utilities\") pod 
\"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\" (UID: \"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a\") " Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.749205 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-utilities" (OuterVolumeSpecName: "utilities") pod "cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" (UID: "cbd16f25-6c59-4ff1-ba96-6b945fa59f2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.761713 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-kube-api-access-6gphz" (OuterVolumeSpecName: "kube-api-access-6gphz") pod "cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" (UID: "cbd16f25-6c59-4ff1-ba96-6b945fa59f2a"). InnerVolumeSpecName "kube-api-access-6gphz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.808516 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" (UID: "cbd16f25-6c59-4ff1-ba96-6b945fa59f2a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.851442 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.851481 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:23 crc kubenswrapper[4678]: I1124 13:00:23.851494 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gphz\" (UniqueName: \"kubernetes.io/projected/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a-kube-api-access-6gphz\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.016123 4678 generic.go:334] "Generic (PLEG): container finished" podID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerID="e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d" exitCode=0 Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.016364 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4n2c" event={"ID":"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a","Type":"ContainerDied","Data":"e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d"} Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.016525 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4n2c" event={"ID":"cbd16f25-6c59-4ff1-ba96-6b945fa59f2a","Type":"ContainerDied","Data":"7eb791315a423d550841144c0608fbc66e203fbb7c7aee5080757b8c20ff99f2"} Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.016556 4678 scope.go:117] "RemoveContainer" containerID="e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 
13:00:24.016443 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4n2c" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.053625 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4n2c"] Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.056552 4678 scope.go:117] "RemoveContainer" containerID="af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.068953 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r4n2c"] Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.096830 4678 scope.go:117] "RemoveContainer" containerID="c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.158193 4678 scope.go:117] "RemoveContainer" containerID="e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d" Nov 24 13:00:24 crc kubenswrapper[4678]: E1124 13:00:24.160066 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d\": container with ID starting with e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d not found: ID does not exist" containerID="e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.160116 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d"} err="failed to get container status \"e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d\": rpc error: code = NotFound desc = could not find container \"e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d\": container with ID starting with 
e2eba0512014e6453c1163c48df206b382d9eb2ea901022eecb2b34480b7ab5d not found: ID does not exist" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.160146 4678 scope.go:117] "RemoveContainer" containerID="af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83" Nov 24 13:00:24 crc kubenswrapper[4678]: E1124 13:00:24.160638 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83\": container with ID starting with af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83 not found: ID does not exist" containerID="af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.160759 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83"} err="failed to get container status \"af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83\": rpc error: code = NotFound desc = could not find container \"af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83\": container with ID starting with af6da4ab87a4be2a072f42236142ed605917aa6537efa3fb6e86942df2e96a83 not found: ID does not exist" Nov 24 13:00:24 crc kubenswrapper[4678]: I1124 13:00:24.160888 4678 scope.go:117] "RemoveContainer" containerID="c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434" Nov 24 13:00:24 crc kubenswrapper[4678]: E1124 13:00:24.161702 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434\": container with ID starting with c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434 not found: ID does not exist" containerID="c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434" Nov 24 13:00:24 crc 
kubenswrapper[4678]: I1124 13:00:24.161774 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434"} err="failed to get container status \"c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434\": rpc error: code = NotFound desc = could not find container \"c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434\": container with ID starting with c8d1db9aa7827547a796eba0bd930c11c30ebc13cde9251b2f5e29fde2a8e434 not found: ID does not exist" Nov 24 13:00:25 crc kubenswrapper[4678]: I1124 13:00:25.912971 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" path="/var/lib/kubelet/pods/cbd16f25-6c59-4ff1-ba96-6b945fa59f2a/volumes" Nov 24 13:00:27 crc kubenswrapper[4678]: I1124 13:00:27.581652 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzmwn" podUID="8c0d2913-b328-4661-8434-5e053b49589f" containerName="registry-server" probeResult="failure" output=< Nov 24 13:00:27 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:00:27 crc kubenswrapper[4678]: > Nov 24 13:00:30 crc kubenswrapper[4678]: I1124 13:00:30.297507 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:00:30 crc kubenswrapper[4678]: I1124 13:00:30.297585 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 
13:00:36 crc kubenswrapper[4678]: I1124 13:00:36.590266 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 13:00:36 crc kubenswrapper[4678]: I1124 13:00:36.647491 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzmwn" Nov 24 13:00:36 crc kubenswrapper[4678]: I1124 13:00:36.752187 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzmwn"] Nov 24 13:00:36 crc kubenswrapper[4678]: I1124 13:00:36.839020 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gl86d"] Nov 24 13:00:36 crc kubenswrapper[4678]: I1124 13:00:36.840066 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gl86d" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="registry-server" containerID="cri-o://ba18c21817ba37cd975492b8047acaccfa6f9684ef94839f6f6c5c0b05845a72" gracePeriod=2 Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.239177 4678 generic.go:334] "Generic (PLEG): container finished" podID="e8671688-d21a-471d-a7ef-aa87d927f001" containerID="ba18c21817ba37cd975492b8047acaccfa6f9684ef94839f6f6c5c0b05845a72" exitCode=0 Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.239931 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl86d" event={"ID":"e8671688-d21a-471d-a7ef-aa87d927f001","Type":"ContainerDied","Data":"ba18c21817ba37cd975492b8047acaccfa6f9684ef94839f6f6c5c0b05845a72"} Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.441411 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.552410 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-utilities\") pod \"e8671688-d21a-471d-a7ef-aa87d927f001\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.552592 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-catalog-content\") pod \"e8671688-d21a-471d-a7ef-aa87d927f001\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.552765 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7psz8\" (UniqueName: \"kubernetes.io/projected/e8671688-d21a-471d-a7ef-aa87d927f001-kube-api-access-7psz8\") pod \"e8671688-d21a-471d-a7ef-aa87d927f001\" (UID: \"e8671688-d21a-471d-a7ef-aa87d927f001\") " Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.554196 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-utilities" (OuterVolumeSpecName: "utilities") pod "e8671688-d21a-471d-a7ef-aa87d927f001" (UID: "e8671688-d21a-471d-a7ef-aa87d927f001"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.560991 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8671688-d21a-471d-a7ef-aa87d927f001-kube-api-access-7psz8" (OuterVolumeSpecName: "kube-api-access-7psz8") pod "e8671688-d21a-471d-a7ef-aa87d927f001" (UID: "e8671688-d21a-471d-a7ef-aa87d927f001"). InnerVolumeSpecName "kube-api-access-7psz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.655106 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.655147 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7psz8\" (UniqueName: \"kubernetes.io/projected/e8671688-d21a-471d-a7ef-aa87d927f001-kube-api-access-7psz8\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.705850 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8671688-d21a-471d-a7ef-aa87d927f001" (UID: "e8671688-d21a-471d-a7ef-aa87d927f001"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:00:37 crc kubenswrapper[4678]: I1124 13:00:37.757212 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8671688-d21a-471d-a7ef-aa87d927f001-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:38 crc kubenswrapper[4678]: I1124 13:00:38.252200 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl86d" event={"ID":"e8671688-d21a-471d-a7ef-aa87d927f001","Type":"ContainerDied","Data":"d656272cc109a6db0414d2780b84da5cdae25ecc78dfb11b14ea1c18be57bcca"} Nov 24 13:00:38 crc kubenswrapper[4678]: I1124 13:00:38.252526 4678 scope.go:117] "RemoveContainer" containerID="ba18c21817ba37cd975492b8047acaccfa6f9684ef94839f6f6c5c0b05845a72" Nov 24 13:00:38 crc kubenswrapper[4678]: I1124 13:00:38.252379 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gl86d" Nov 24 13:00:38 crc kubenswrapper[4678]: I1124 13:00:38.279167 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gl86d"] Nov 24 13:00:38 crc kubenswrapper[4678]: I1124 13:00:38.282890 4678 scope.go:117] "RemoveContainer" containerID="b31e628eee9d6bfd9180a0dd55b1faa6ee8931fc95d3cfb45820d5fd8623d0fa" Nov 24 13:00:38 crc kubenswrapper[4678]: I1124 13:00:38.289929 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gl86d"] Nov 24 13:00:38 crc kubenswrapper[4678]: I1124 13:00:38.340230 4678 scope.go:117] "RemoveContainer" containerID="111d277ea07d7e7e7af168eefae147fa643e986b444a55e5643a705908bc6870" Nov 24 13:00:39 crc kubenswrapper[4678]: I1124 13:00:39.913390 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" path="/var/lib/kubelet/pods/e8671688-d21a-471d-a7ef-aa87d927f001/volumes" Nov 24 13:00:40 crc kubenswrapper[4678]: I1124 13:00:40.376778 4678 scope.go:117] "RemoveContainer" containerID="03a7f6a18116ef5ce44d4b8a06c75ac3aac41b5e650dab83e01b1c38bbd55bc5" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.151522 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399821-b756t"] Nov 24 13:01:00 crc kubenswrapper[4678]: E1124 13:01:00.152598 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.152616 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4678]: E1124 13:01:00.152640 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="extract-content" Nov 24 13:01:00 crc 
kubenswrapper[4678]: I1124 13:01:00.152649 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="extract-content" Nov 24 13:01:00 crc kubenswrapper[4678]: E1124 13:01:00.152662 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9946313-48a8-43a9-9ff0-56fcb26991a7" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.152877 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9946313-48a8-43a9-9ff0-56fcb26991a7" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4678]: E1124 13:01:00.152895 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.152902 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4678]: E1124 13:01:00.152912 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.152919 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="extract-utilities" Nov 24 13:01:00 crc kubenswrapper[4678]: E1124 13:01:00.152944 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.152953 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4678]: E1124 13:01:00.152981 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="extract-content" Nov 24 13:01:00 crc 
kubenswrapper[4678]: I1124 13:01:00.152988 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="extract-content" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.153204 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8671688-d21a-471d-a7ef-aa87d927f001" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.153232 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9946313-48a8-43a9-9ff0-56fcb26991a7" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.153253 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd16f25-6c59-4ff1-ba96-6b945fa59f2a" containerName="registry-server" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.154116 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.173580 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399821-b756t"] Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.296920 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.297307 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.338087 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-config-data\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.338178 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-fernet-keys\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.338215 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mrkn\" (UniqueName: \"kubernetes.io/projected/907db682-c7c3-459d-8030-295f0d16951b-kube-api-access-6mrkn\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.338243 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-combined-ca-bundle\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.440813 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mrkn\" (UniqueName: \"kubernetes.io/projected/907db682-c7c3-459d-8030-295f0d16951b-kube-api-access-6mrkn\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 
13:01:00.440869 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-combined-ca-bundle\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.441039 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-config-data\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.441112 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-fernet-keys\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.450202 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-config-data\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.450401 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-combined-ca-bundle\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.461534 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-fernet-keys\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.474430 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mrkn\" (UniqueName: \"kubernetes.io/projected/907db682-c7c3-459d-8030-295f0d16951b-kube-api-access-6mrkn\") pod \"keystone-cron-29399821-b756t\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.487336 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:00 crc kubenswrapper[4678]: I1124 13:01:00.962892 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399821-b756t"] Nov 24 13:01:01 crc kubenswrapper[4678]: I1124 13:01:01.534575 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-b756t" event={"ID":"907db682-c7c3-459d-8030-295f0d16951b","Type":"ContainerStarted","Data":"428d5ac7bf6487aea16cf33d8a686d61e2460a8a78a4f629a6986e566b3878fa"} Nov 24 13:01:01 crc kubenswrapper[4678]: I1124 13:01:01.534636 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-b756t" event={"ID":"907db682-c7c3-459d-8030-295f0d16951b","Type":"ContainerStarted","Data":"0c6f7a6cbd11e974ecf05d858387e1e9803661ce0e753f49b251d3be41920b07"} Nov 24 13:01:01 crc kubenswrapper[4678]: I1124 13:01:01.552219 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399821-b756t" podStartSLOduration=1.552195532 podStartE2EDuration="1.552195532s" podCreationTimestamp="2025-11-24 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:01:01.549702935 +0000 UTC m=+6272.480762574" watchObservedRunningTime="2025-11-24 13:01:01.552195532 +0000 UTC m=+6272.483255521" Nov 24 13:01:05 crc kubenswrapper[4678]: I1124 13:01:05.588384 4678 generic.go:334] "Generic (PLEG): container finished" podID="907db682-c7c3-459d-8030-295f0d16951b" containerID="428d5ac7bf6487aea16cf33d8a686d61e2460a8a78a4f629a6986e566b3878fa" exitCode=0 Nov 24 13:01:05 crc kubenswrapper[4678]: I1124 13:01:05.588463 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-b756t" event={"ID":"907db682-c7c3-459d-8030-295f0d16951b","Type":"ContainerDied","Data":"428d5ac7bf6487aea16cf33d8a686d61e2460a8a78a4f629a6986e566b3878fa"} Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.024485 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.117213 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-combined-ca-bundle\") pod \"907db682-c7c3-459d-8030-295f0d16951b\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.117389 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mrkn\" (UniqueName: \"kubernetes.io/projected/907db682-c7c3-459d-8030-295f0d16951b-kube-api-access-6mrkn\") pod \"907db682-c7c3-459d-8030-295f0d16951b\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.117418 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-config-data\") pod \"907db682-c7c3-459d-8030-295f0d16951b\" 
(UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.117489 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-fernet-keys\") pod \"907db682-c7c3-459d-8030-295f0d16951b\" (UID: \"907db682-c7c3-459d-8030-295f0d16951b\") " Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.125466 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907db682-c7c3-459d-8030-295f0d16951b-kube-api-access-6mrkn" (OuterVolumeSpecName: "kube-api-access-6mrkn") pod "907db682-c7c3-459d-8030-295f0d16951b" (UID: "907db682-c7c3-459d-8030-295f0d16951b"). InnerVolumeSpecName "kube-api-access-6mrkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.127468 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "907db682-c7c3-459d-8030-295f0d16951b" (UID: "907db682-c7c3-459d-8030-295f0d16951b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.155344 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "907db682-c7c3-459d-8030-295f0d16951b" (UID: "907db682-c7c3-459d-8030-295f0d16951b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.187353 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-config-data" (OuterVolumeSpecName: "config-data") pod "907db682-c7c3-459d-8030-295f0d16951b" (UID: "907db682-c7c3-459d-8030-295f0d16951b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.222747 4678 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.222823 4678 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.222840 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mrkn\" (UniqueName: \"kubernetes.io/projected/907db682-c7c3-459d-8030-295f0d16951b-kube-api-access-6mrkn\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.222856 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907db682-c7c3-459d-8030-295f0d16951b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.616025 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-b756t" event={"ID":"907db682-c7c3-459d-8030-295f0d16951b","Type":"ContainerDied","Data":"0c6f7a6cbd11e974ecf05d858387e1e9803661ce0e753f49b251d3be41920b07"} Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.616080 4678 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="0c6f7a6cbd11e974ecf05d858387e1e9803661ce0e753f49b251d3be41920b07" Nov 24 13:01:07 crc kubenswrapper[4678]: I1124 13:01:07.616114 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-b756t" Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.296941 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.297508 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.297563 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.298715 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.298783 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" 
containerID="cri-o://4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" gracePeriod=600 Nov 24 13:01:30 crc kubenswrapper[4678]: E1124 13:01:30.424158 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.907916 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" exitCode=0 Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.907981 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497"} Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.908029 4678 scope.go:117] "RemoveContainer" containerID="73c581ee7cce6b381caa43bd1131c151f15863e02fb6b2474d937500276d7568" Nov 24 13:01:30 crc kubenswrapper[4678]: I1124 13:01:30.908806 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:01:30 crc kubenswrapper[4678]: E1124 13:01:30.909203 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:01:45 crc kubenswrapper[4678]: I1124 13:01:45.895836 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:01:45 crc kubenswrapper[4678]: E1124 13:01:45.896594 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:02:00 crc kubenswrapper[4678]: I1124 13:02:00.895608 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:02:00 crc kubenswrapper[4678]: E1124 13:02:00.896396 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:02:12 crc kubenswrapper[4678]: I1124 13:02:12.895649 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:02:12 crc kubenswrapper[4678]: E1124 13:02:12.896917 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:02:27 crc kubenswrapper[4678]: I1124 13:02:27.895229 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:02:27 crc kubenswrapper[4678]: E1124 13:02:27.896221 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:02:41 crc kubenswrapper[4678]: I1124 13:02:41.896599 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:02:41 crc kubenswrapper[4678]: E1124 13:02:41.900483 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:02:56 crc kubenswrapper[4678]: I1124 13:02:56.895955 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:02:56 crc kubenswrapper[4678]: E1124 13:02:56.896968 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:03:07 crc kubenswrapper[4678]: I1124 13:03:07.896211 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:03:07 crc kubenswrapper[4678]: E1124 13:03:07.897523 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:03:18 crc kubenswrapper[4678]: I1124 13:03:18.896517 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:03:18 crc kubenswrapper[4678]: E1124 13:03:18.897412 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:03:32 crc kubenswrapper[4678]: I1124 13:03:32.896099 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:03:32 crc kubenswrapper[4678]: E1124 13:03:32.897427 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:03:47 crc kubenswrapper[4678]: I1124 13:03:47.897592 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:03:47 crc kubenswrapper[4678]: E1124 13:03:47.899282 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:04:01 crc kubenswrapper[4678]: I1124 13:04:01.896147 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:04:01 crc kubenswrapper[4678]: E1124 13:04:01.898045 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:04:13 crc kubenswrapper[4678]: I1124 13:04:13.897190 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:04:13 crc kubenswrapper[4678]: E1124 13:04:13.898507 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:04:28 crc kubenswrapper[4678]: I1124 13:04:28.896904 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:04:28 crc kubenswrapper[4678]: E1124 13:04:28.897952 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:04:41 crc kubenswrapper[4678]: I1124 13:04:41.897049 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:04:41 crc kubenswrapper[4678]: E1124 13:04:41.897915 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:04:52 crc kubenswrapper[4678]: I1124 13:04:52.895891 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:04:52 crc kubenswrapper[4678]: E1124 13:04:52.898410 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.808425 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vks9f"] Nov 24 13:04:56 crc kubenswrapper[4678]: E1124 13:04:56.810230 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907db682-c7c3-459d-8030-295f0d16951b" containerName="keystone-cron" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.810253 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="907db682-c7c3-459d-8030-295f0d16951b" containerName="keystone-cron" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.810567 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="907db682-c7c3-459d-8030-295f0d16951b" containerName="keystone-cron" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.813524 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.840536 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56mvh\" (UniqueName: \"kubernetes.io/projected/8359392c-8dca-46dd-8db5-0724b4beb05e-kube-api-access-56mvh\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.840601 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-catalog-content\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.840814 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-utilities\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.845941 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vks9f"] Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.945293 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-utilities\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.945425 4678 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-56mvh\" (UniqueName: \"kubernetes.io/projected/8359392c-8dca-46dd-8db5-0724b4beb05e-kube-api-access-56mvh\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.945460 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-catalog-content\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.949954 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-utilities\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.950255 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-catalog-content\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:56 crc kubenswrapper[4678]: I1124 13:04:56.980061 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56mvh\" (UniqueName: \"kubernetes.io/projected/8359392c-8dca-46dd-8db5-0724b4beb05e-kube-api-access-56mvh\") pod \"redhat-marketplace-vks9f\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:57 crc kubenswrapper[4678]: I1124 13:04:57.141444 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:04:57 crc kubenswrapper[4678]: I1124 13:04:57.647052 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vks9f"] Nov 24 13:04:58 crc kubenswrapper[4678]: I1124 13:04:58.210805 4678 generic.go:334] "Generic (PLEG): container finished" podID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerID="b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08" exitCode=0 Nov 24 13:04:58 crc kubenswrapper[4678]: I1124 13:04:58.210858 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vks9f" event={"ID":"8359392c-8dca-46dd-8db5-0724b4beb05e","Type":"ContainerDied","Data":"b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08"} Nov 24 13:04:58 crc kubenswrapper[4678]: I1124 13:04:58.210888 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vks9f" event={"ID":"8359392c-8dca-46dd-8db5-0724b4beb05e","Type":"ContainerStarted","Data":"1477e27efd9ce985d4ba3cef285d5b027512c80443b10a9df41959c36b01b737"} Nov 24 13:04:58 crc kubenswrapper[4678]: I1124 13:04:58.213166 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:04:59 crc kubenswrapper[4678]: I1124 13:04:59.226663 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vks9f" event={"ID":"8359392c-8dca-46dd-8db5-0724b4beb05e","Type":"ContainerStarted","Data":"235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea"} Nov 24 13:05:00 crc kubenswrapper[4678]: I1124 13:05:00.244613 4678 generic.go:334] "Generic (PLEG): container finished" podID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerID="235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea" exitCode=0 Nov 24 13:05:00 crc kubenswrapper[4678]: I1124 13:05:00.244747 4678 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-vks9f" event={"ID":"8359392c-8dca-46dd-8db5-0724b4beb05e","Type":"ContainerDied","Data":"235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea"} Nov 24 13:05:01 crc kubenswrapper[4678]: I1124 13:05:01.257817 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vks9f" event={"ID":"8359392c-8dca-46dd-8db5-0724b4beb05e","Type":"ContainerStarted","Data":"155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4"} Nov 24 13:05:01 crc kubenswrapper[4678]: I1124 13:05:01.280290 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vks9f" podStartSLOduration=2.833753906 podStartE2EDuration="5.280214971s" podCreationTimestamp="2025-11-24 13:04:56 +0000 UTC" firstStartedPulling="2025-11-24 13:04:58.212826132 +0000 UTC m=+6509.143885771" lastFinishedPulling="2025-11-24 13:05:00.659287197 +0000 UTC m=+6511.590346836" observedRunningTime="2025-11-24 13:05:01.274243512 +0000 UTC m=+6512.205303161" watchObservedRunningTime="2025-11-24 13:05:01.280214971 +0000 UTC m=+6512.211274610" Nov 24 13:05:03 crc kubenswrapper[4678]: I1124 13:05:03.897094 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:05:03 crc kubenswrapper[4678]: E1124 13:05:03.900210 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:05:07 crc kubenswrapper[4678]: I1124 13:05:07.142403 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:05:07 crc kubenswrapper[4678]: I1124 13:05:07.143032 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:05:07 crc kubenswrapper[4678]: I1124 13:05:07.200537 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:05:07 crc kubenswrapper[4678]: I1124 13:05:07.384291 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:05:07 crc kubenswrapper[4678]: I1124 13:05:07.446762 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vks9f"] Nov 24 13:05:09 crc kubenswrapper[4678]: I1124 13:05:09.357798 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vks9f" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerName="registry-server" containerID="cri-o://155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4" gracePeriod=2 Nov 24 13:05:09 crc kubenswrapper[4678]: I1124 13:05:09.984720 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.082925 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-utilities\") pod \"8359392c-8dca-46dd-8db5-0724b4beb05e\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.083468 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-catalog-content\") pod \"8359392c-8dca-46dd-8db5-0724b4beb05e\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.083615 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56mvh\" (UniqueName: \"kubernetes.io/projected/8359392c-8dca-46dd-8db5-0724b4beb05e-kube-api-access-56mvh\") pod \"8359392c-8dca-46dd-8db5-0724b4beb05e\" (UID: \"8359392c-8dca-46dd-8db5-0724b4beb05e\") " Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.084184 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-utilities" (OuterVolumeSpecName: "utilities") pod "8359392c-8dca-46dd-8db5-0724b4beb05e" (UID: "8359392c-8dca-46dd-8db5-0724b4beb05e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.085266 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.105171 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8359392c-8dca-46dd-8db5-0724b4beb05e-kube-api-access-56mvh" (OuterVolumeSpecName: "kube-api-access-56mvh") pod "8359392c-8dca-46dd-8db5-0724b4beb05e" (UID: "8359392c-8dca-46dd-8db5-0724b4beb05e"). InnerVolumeSpecName "kube-api-access-56mvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.106781 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8359392c-8dca-46dd-8db5-0724b4beb05e" (UID: "8359392c-8dca-46dd-8db5-0724b4beb05e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.188113 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56mvh\" (UniqueName: \"kubernetes.io/projected/8359392c-8dca-46dd-8db5-0724b4beb05e-kube-api-access-56mvh\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.188525 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8359392c-8dca-46dd-8db5-0724b4beb05e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.410188 4678 generic.go:334] "Generic (PLEG): container finished" podID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerID="155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4" exitCode=0 Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.410246 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vks9f" event={"ID":"8359392c-8dca-46dd-8db5-0724b4beb05e","Type":"ContainerDied","Data":"155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4"} Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.410282 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vks9f" event={"ID":"8359392c-8dca-46dd-8db5-0724b4beb05e","Type":"ContainerDied","Data":"1477e27efd9ce985d4ba3cef285d5b027512c80443b10a9df41959c36b01b737"} Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.410303 4678 scope.go:117] "RemoveContainer" containerID="155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.410531 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vks9f" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.441132 4678 scope.go:117] "RemoveContainer" containerID="235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.478819 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vks9f"] Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.492136 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vks9f"] Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.494761 4678 scope.go:117] "RemoveContainer" containerID="b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.538576 4678 scope.go:117] "RemoveContainer" containerID="155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4" Nov 24 13:05:10 crc kubenswrapper[4678]: E1124 13:05:10.539459 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4\": container with ID starting with 155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4 not found: ID does not exist" containerID="155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.539542 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4"} err="failed to get container status \"155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4\": rpc error: code = NotFound desc = could not find container \"155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4\": container with ID starting with 155775365f518dda311d770cc3c258b7ee3ca50a017c6462436aa48178d406c4 not found: 
ID does not exist" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.539595 4678 scope.go:117] "RemoveContainer" containerID="235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea" Nov 24 13:05:10 crc kubenswrapper[4678]: E1124 13:05:10.540248 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea\": container with ID starting with 235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea not found: ID does not exist" containerID="235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.540308 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea"} err="failed to get container status \"235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea\": rpc error: code = NotFound desc = could not find container \"235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea\": container with ID starting with 235cb971b6fe0ecb6a7df100ec4a89da180a6869f70228d616960432cd0aacea not found: ID does not exist" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.540330 4678 scope.go:117] "RemoveContainer" containerID="b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08" Nov 24 13:05:10 crc kubenswrapper[4678]: E1124 13:05:10.540929 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08\": container with ID starting with b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08 not found: ID does not exist" containerID="b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08" Nov 24 13:05:10 crc kubenswrapper[4678]: I1124 13:05:10.540991 4678 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08"} err="failed to get container status \"b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08\": rpc error: code = NotFound desc = could not find container \"b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08\": container with ID starting with b1245923dabc838aebdcc182bc1579068e4c7ad5099c574f647660fd90ab5c08 not found: ID does not exist" Nov 24 13:05:10 crc kubenswrapper[4678]: E1124 13:05:10.652790 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8359392c_8dca_46dd_8db5_0724b4beb05e.slice\": RecentStats: unable to find data in memory cache]" Nov 24 13:05:11 crc kubenswrapper[4678]: I1124 13:05:11.910894 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" path="/var/lib/kubelet/pods/8359392c-8dca-46dd-8db5-0724b4beb05e/volumes" Nov 24 13:05:14 crc kubenswrapper[4678]: I1124 13:05:14.896349 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:05:14 crc kubenswrapper[4678]: E1124 13:05:14.897232 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:05:28 crc kubenswrapper[4678]: I1124 13:05:28.896394 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:05:28 crc kubenswrapper[4678]: E1124 13:05:28.897782 
4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:05:41 crc kubenswrapper[4678]: I1124 13:05:41.896631 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:05:41 crc kubenswrapper[4678]: E1124 13:05:41.897815 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:05:53 crc kubenswrapper[4678]: I1124 13:05:53.896856 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:05:53 crc kubenswrapper[4678]: E1124 13:05:53.898049 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:06:04 crc kubenswrapper[4678]: I1124 13:06:04.896688 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:06:04 crc kubenswrapper[4678]: E1124 
13:06:04.898299 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:06:16 crc kubenswrapper[4678]: I1124 13:06:16.895871 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:06:16 crc kubenswrapper[4678]: E1124 13:06:16.899818 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:06:30 crc kubenswrapper[4678]: I1124 13:06:30.359809 4678 generic.go:334] "Generic (PLEG): container finished" podID="fa52a8b5-88fb-4f22-b067-edbdcee003ea" containerID="b328a9428be729c8687d35538da213c2ddeaaeb0521256ea48ae3a6152056db3" exitCode=0 Nov 24 13:06:30 crc kubenswrapper[4678]: I1124 13:06:30.360357 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa52a8b5-88fb-4f22-b067-edbdcee003ea","Type":"ContainerDied","Data":"b328a9428be729c8687d35538da213c2ddeaaeb0521256ea48ae3a6152056db3"} Nov 24 13:06:30 crc kubenswrapper[4678]: I1124 13:06:30.897059 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.373622 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"1d49db0a3acb427f624097f22598b79529846e1454fe47b119a335df94a836cf"} Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.805335 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969056 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config-secret\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969113 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ca-certs\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969353 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ssh-key\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969409 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969469 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-temporary\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969515 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-workdir\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969551 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969662 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-config-data\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.969766 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v8rg\" (UniqueName: \"kubernetes.io/projected/fa52a8b5-88fb-4f22-b067-edbdcee003ea-kube-api-access-5v8rg\") pod \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\" (UID: \"fa52a8b5-88fb-4f22-b067-edbdcee003ea\") " Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.975377 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" 
(UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.977789 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-config-data" (OuterVolumeSpecName: "config-data") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" (UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:06:31 crc kubenswrapper[4678]: I1124 13:06:31.990859 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" (UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.011130 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa52a8b5-88fb-4f22-b067-edbdcee003ea-kube-api-access-5v8rg" (OuterVolumeSpecName: "kube-api-access-5v8rg") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" (UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "kube-api-access-5v8rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.011343 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" (UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.034542 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" (UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.039594 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" (UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.039643 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" (UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.082396 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fa52a8b5-88fb-4f22-b067-edbdcee003ea" (UID: "fa52a8b5-88fb-4f22-b067-edbdcee003ea"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.084926 4678 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.086528 4678 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.086592 4678 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.086607 4678 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa52a8b5-88fb-4f22-b067-edbdcee003ea-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.086618 4678 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.086628 4678 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa52a8b5-88fb-4f22-b067-edbdcee003ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.086637 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v8rg\" (UniqueName: \"kubernetes.io/projected/fa52a8b5-88fb-4f22-b067-edbdcee003ea-kube-api-access-5v8rg\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 
crc kubenswrapper[4678]: I1124 13:06:32.086647 4678 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.086655 4678 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa52a8b5-88fb-4f22-b067-edbdcee003ea-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.120434 4678 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.191919 4678 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.387060 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa52a8b5-88fb-4f22-b067-edbdcee003ea","Type":"ContainerDied","Data":"4e27ceeb04a9cc39a0a72f2438eea255cff1dc74118105a7ca4c5aa5c281629a"} Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.387118 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e27ceeb04a9cc39a0a72f2438eea255cff1dc74118105a7ca4c5aa5c281629a" Nov 24 13:06:32 crc kubenswrapper[4678]: I1124 13:06:32.387430 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 13:06:32 crc kubenswrapper[4678]: E1124 13:06:32.474373 4678 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa52a8b5_88fb_4f22_b067_edbdcee003ea.slice/crio-4e27ceeb04a9cc39a0a72f2438eea255cff1dc74118105a7ca4c5aa5c281629a\": RecentStats: unable to find data in memory cache]" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.993329 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 13:06:39 crc kubenswrapper[4678]: E1124 13:06:39.995124 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa52a8b5-88fb-4f22-b067-edbdcee003ea" containerName="tempest-tests-tempest-tests-runner" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.995172 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa52a8b5-88fb-4f22-b067-edbdcee003ea" containerName="tempest-tests-tempest-tests-runner" Nov 24 13:06:39 crc kubenswrapper[4678]: E1124 13:06:39.995185 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerName="extract-utilities" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.995192 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerName="extract-utilities" Nov 24 13:06:39 crc kubenswrapper[4678]: E1124 13:06:39.995214 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerName="extract-content" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.995233 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerName="extract-content" Nov 24 13:06:39 crc kubenswrapper[4678]: E1124 13:06:39.995291 4678 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerName="registry-server" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.995297 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerName="registry-server" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.995570 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="8359392c-8dca-46dd-8db5-0724b4beb05e" containerName="registry-server" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.995601 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa52a8b5-88fb-4f22-b067-edbdcee003ea" containerName="tempest-tests-tempest-tests-runner" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.996841 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:39 crc kubenswrapper[4678]: I1124 13:06:39.999818 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fghgg" Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.006931 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.181153 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3915bea2-2199-409f-b6f6-842f0b991f93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.181413 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbmf\" (UniqueName: 
\"kubernetes.io/projected/3915bea2-2199-409f-b6f6-842f0b991f93-kube-api-access-lqbmf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3915bea2-2199-409f-b6f6-842f0b991f93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.284929 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbmf\" (UniqueName: \"kubernetes.io/projected/3915bea2-2199-409f-b6f6-842f0b991f93-kube-api-access-lqbmf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3915bea2-2199-409f-b6f6-842f0b991f93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.285032 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3915bea2-2199-409f-b6f6-842f0b991f93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.287414 4678 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3915bea2-2199-409f-b6f6-842f0b991f93\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.328368 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbmf\" (UniqueName: \"kubernetes.io/projected/3915bea2-2199-409f-b6f6-842f0b991f93-kube-api-access-lqbmf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3915bea2-2199-409f-b6f6-842f0b991f93\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.358917 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3915bea2-2199-409f-b6f6-842f0b991f93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:40 crc kubenswrapper[4678]: I1124 13:06:40.628628 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:06:41 crc kubenswrapper[4678]: I1124 13:06:41.195582 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 13:06:41 crc kubenswrapper[4678]: I1124 13:06:41.485008 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3915bea2-2199-409f-b6f6-842f0b991f93","Type":"ContainerStarted","Data":"36a0000610a9b1a1ca5483fdcbcad50c1450ba78f2de12b77d9152713cafe0bc"} Nov 24 13:06:42 crc kubenswrapper[4678]: I1124 13:06:42.495781 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3915bea2-2199-409f-b6f6-842f0b991f93","Type":"ContainerStarted","Data":"2500e308b3619612385e6ceda9b34e07fc0ba25ff2c16a187f1fcbaf8d48c466"} Nov 24 13:06:42 crc kubenswrapper[4678]: I1124 13:06:42.529062 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.55944209 podStartE2EDuration="3.529037304s" podCreationTimestamp="2025-11-24 13:06:39 +0000 UTC" firstStartedPulling="2025-11-24 13:06:41.206505716 +0000 UTC m=+6612.137565355" lastFinishedPulling="2025-11-24 
13:06:42.17610093 +0000 UTC m=+6613.107160569" observedRunningTime="2025-11-24 13:06:42.512358358 +0000 UTC m=+6613.443418007" watchObservedRunningTime="2025-11-24 13:06:42.529037304 +0000 UTC m=+6613.460096953" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.630299 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lbq7m/must-gather-tf29z"] Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.633912 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.642954 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lbq7m"/"kube-root-ca.crt" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.645405 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lbq7m"/"openshift-service-ca.crt" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.800985 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/928b11e7-3bbf-44d7-ad03-117642de2eca-must-gather-output\") pod \"must-gather-tf29z\" (UID: \"928b11e7-3bbf-44d7-ad03-117642de2eca\") " pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.801109 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v7w5\" (UniqueName: \"kubernetes.io/projected/928b11e7-3bbf-44d7-ad03-117642de2eca-kube-api-access-6v7w5\") pod \"must-gather-tf29z\" (UID: \"928b11e7-3bbf-44d7-ad03-117642de2eca\") " pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.847047 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lbq7m/must-gather-tf29z"] Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 
13:07:37.903180 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/928b11e7-3bbf-44d7-ad03-117642de2eca-must-gather-output\") pod \"must-gather-tf29z\" (UID: \"928b11e7-3bbf-44d7-ad03-117642de2eca\") " pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.903377 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v7w5\" (UniqueName: \"kubernetes.io/projected/928b11e7-3bbf-44d7-ad03-117642de2eca-kube-api-access-6v7w5\") pod \"must-gather-tf29z\" (UID: \"928b11e7-3bbf-44d7-ad03-117642de2eca\") " pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.904064 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/928b11e7-3bbf-44d7-ad03-117642de2eca-must-gather-output\") pod \"must-gather-tf29z\" (UID: \"928b11e7-3bbf-44d7-ad03-117642de2eca\") " pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.923808 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v7w5\" (UniqueName: \"kubernetes.io/projected/928b11e7-3bbf-44d7-ad03-117642de2eca-kube-api-access-6v7w5\") pod \"must-gather-tf29z\" (UID: \"928b11e7-3bbf-44d7-ad03-117642de2eca\") " pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:07:37 crc kubenswrapper[4678]: I1124 13:07:37.964047 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:07:38 crc kubenswrapper[4678]: I1124 13:07:38.530217 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lbq7m/must-gather-tf29z"] Nov 24 13:07:39 crc kubenswrapper[4678]: I1124 13:07:39.140430 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/must-gather-tf29z" event={"ID":"928b11e7-3bbf-44d7-ad03-117642de2eca","Type":"ContainerStarted","Data":"dac0cd70718d284d8133dc6f2f40d0978360fef0016228fb67241a453325a189"} Nov 24 13:07:44 crc kubenswrapper[4678]: I1124 13:07:44.200491 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/must-gather-tf29z" event={"ID":"928b11e7-3bbf-44d7-ad03-117642de2eca","Type":"ContainerStarted","Data":"34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997"} Nov 24 13:07:45 crc kubenswrapper[4678]: I1124 13:07:45.213756 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/must-gather-tf29z" event={"ID":"928b11e7-3bbf-44d7-ad03-117642de2eca","Type":"ContainerStarted","Data":"87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894"} Nov 24 13:07:45 crc kubenswrapper[4678]: I1124 13:07:45.249376 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lbq7m/must-gather-tf29z" podStartSLOduration=2.9616513700000002 podStartE2EDuration="8.249353072s" podCreationTimestamp="2025-11-24 13:07:37 +0000 UTC" firstStartedPulling="2025-11-24 13:07:38.523643671 +0000 UTC m=+6669.454703310" lastFinishedPulling="2025-11-24 13:07:43.811345373 +0000 UTC m=+6674.742405012" observedRunningTime="2025-11-24 13:07:45.240002071 +0000 UTC m=+6676.171061710" watchObservedRunningTime="2025-11-24 13:07:45.249353072 +0000 UTC m=+6676.180412711" Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.586142 4678 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-lbq7m/crc-debug-4bl8j"] Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.589218 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.591841 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lbq7m"/"default-dockercfg-z7qlf" Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.702273 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e7ec964-82e8-441f-ac81-5d4d23b5db82-host\") pod \"crc-debug-4bl8j\" (UID: \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\") " pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.702534 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrsgn\" (UniqueName: \"kubernetes.io/projected/8e7ec964-82e8-441f-ac81-5d4d23b5db82-kube-api-access-qrsgn\") pod \"crc-debug-4bl8j\" (UID: \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\") " pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.805496 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrsgn\" (UniqueName: \"kubernetes.io/projected/8e7ec964-82e8-441f-ac81-5d4d23b5db82-kube-api-access-qrsgn\") pod \"crc-debug-4bl8j\" (UID: \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\") " pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.805754 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e7ec964-82e8-441f-ac81-5d4d23b5db82-host\") pod \"crc-debug-4bl8j\" (UID: \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\") " pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:07:49 crc 
kubenswrapper[4678]: I1124 13:07:49.805886 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e7ec964-82e8-441f-ac81-5d4d23b5db82-host\") pod \"crc-debug-4bl8j\" (UID: \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\") " pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.825045 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrsgn\" (UniqueName: \"kubernetes.io/projected/8e7ec964-82e8-441f-ac81-5d4d23b5db82-kube-api-access-qrsgn\") pod \"crc-debug-4bl8j\" (UID: \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\") " pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:07:49 crc kubenswrapper[4678]: I1124 13:07:49.911612 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:07:50 crc kubenswrapper[4678]: I1124 13:07:50.266628 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" event={"ID":"8e7ec964-82e8-441f-ac81-5d4d23b5db82","Type":"ContainerStarted","Data":"655a42f950cfb23e583a61b4448927082c2d4557f68a8d7945b3c8e7888dd141"} Nov 24 13:08:03 crc kubenswrapper[4678]: I1124 13:08:03.423864 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" event={"ID":"8e7ec964-82e8-441f-ac81-5d4d23b5db82","Type":"ContainerStarted","Data":"94fb507d92b6906f953bf7db8273f8b2624d35ae785cba12b0c6e72f864352db"} Nov 24 13:08:03 crc kubenswrapper[4678]: I1124 13:08:03.459837 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" podStartSLOduration=1.9149690050000001 podStartE2EDuration="14.459807997s" podCreationTimestamp="2025-11-24 13:07:49 +0000 UTC" firstStartedPulling="2025-11-24 13:07:49.978363343 +0000 UTC m=+6680.909422982" lastFinishedPulling="2025-11-24 13:08:02.523202335 +0000 
UTC m=+6693.454261974" observedRunningTime="2025-11-24 13:08:03.443221323 +0000 UTC m=+6694.374281032" watchObservedRunningTime="2025-11-24 13:08:03.459807997 +0000 UTC m=+6694.390867636" Nov 24 13:09:00 crc kubenswrapper[4678]: I1124 13:09:00.158686 4678 generic.go:334] "Generic (PLEG): container finished" podID="8e7ec964-82e8-441f-ac81-5d4d23b5db82" containerID="94fb507d92b6906f953bf7db8273f8b2624d35ae785cba12b0c6e72f864352db" exitCode=0 Nov 24 13:09:00 crc kubenswrapper[4678]: I1124 13:09:00.158802 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" event={"ID":"8e7ec964-82e8-441f-ac81-5d4d23b5db82","Type":"ContainerDied","Data":"94fb507d92b6906f953bf7db8273f8b2624d35ae785cba12b0c6e72f864352db"} Nov 24 13:09:00 crc kubenswrapper[4678]: I1124 13:09:00.297343 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:09:00 crc kubenswrapper[4678]: I1124 13:09:00.297798 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.332045 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.385242 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lbq7m/crc-debug-4bl8j"] Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.394501 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lbq7m/crc-debug-4bl8j"] Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.432223 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e7ec964-82e8-441f-ac81-5d4d23b5db82-host\") pod \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\" (UID: \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\") " Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.432540 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrsgn\" (UniqueName: \"kubernetes.io/projected/8e7ec964-82e8-441f-ac81-5d4d23b5db82-kube-api-access-qrsgn\") pod \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\" (UID: \"8e7ec964-82e8-441f-ac81-5d4d23b5db82\") " Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.432696 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7ec964-82e8-441f-ac81-5d4d23b5db82-host" (OuterVolumeSpecName: "host") pod "8e7ec964-82e8-441f-ac81-5d4d23b5db82" (UID: "8e7ec964-82e8-441f-ac81-5d4d23b5db82"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.433322 4678 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e7ec964-82e8-441f-ac81-5d4d23b5db82-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.440525 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e7ec964-82e8-441f-ac81-5d4d23b5db82-kube-api-access-qrsgn" (OuterVolumeSpecName: "kube-api-access-qrsgn") pod "8e7ec964-82e8-441f-ac81-5d4d23b5db82" (UID: "8e7ec964-82e8-441f-ac81-5d4d23b5db82"). InnerVolumeSpecName "kube-api-access-qrsgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.536704 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrsgn\" (UniqueName: \"kubernetes.io/projected/8e7ec964-82e8-441f-ac81-5d4d23b5db82-kube-api-access-qrsgn\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:01 crc kubenswrapper[4678]: I1124 13:09:01.911954 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7ec964-82e8-441f-ac81-5d4d23b5db82" path="/var/lib/kubelet/pods/8e7ec964-82e8-441f-ac81-5d4d23b5db82/volumes" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.180478 4678 scope.go:117] "RemoveContainer" containerID="94fb507d92b6906f953bf7db8273f8b2624d35ae785cba12b0c6e72f864352db" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.180578 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-4bl8j" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.544661 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lbq7m/crc-debug-vc9wd"] Nov 24 13:09:02 crc kubenswrapper[4678]: E1124 13:09:02.545202 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7ec964-82e8-441f-ac81-5d4d23b5db82" containerName="container-00" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.545215 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7ec964-82e8-441f-ac81-5d4d23b5db82" containerName="container-00" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.545437 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7ec964-82e8-441f-ac81-5d4d23b5db82" containerName="container-00" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.546278 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.548850 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lbq7m"/"default-dockercfg-z7qlf" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.668846 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48ch4\" (UniqueName: \"kubernetes.io/projected/9b60b202-0e31-4642-a759-c9687f13e579-kube-api-access-48ch4\") pod \"crc-debug-vc9wd\" (UID: \"9b60b202-0e31-4642-a759-c9687f13e579\") " pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.668922 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b60b202-0e31-4642-a759-c9687f13e579-host\") pod \"crc-debug-vc9wd\" (UID: \"9b60b202-0e31-4642-a759-c9687f13e579\") " 
pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.770652 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48ch4\" (UniqueName: \"kubernetes.io/projected/9b60b202-0e31-4642-a759-c9687f13e579-kube-api-access-48ch4\") pod \"crc-debug-vc9wd\" (UID: \"9b60b202-0e31-4642-a759-c9687f13e579\") " pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.770756 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b60b202-0e31-4642-a759-c9687f13e579-host\") pod \"crc-debug-vc9wd\" (UID: \"9b60b202-0e31-4642-a759-c9687f13e579\") " pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.770989 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b60b202-0e31-4642-a759-c9687f13e579-host\") pod \"crc-debug-vc9wd\" (UID: \"9b60b202-0e31-4642-a759-c9687f13e579\") " pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.790197 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48ch4\" (UniqueName: \"kubernetes.io/projected/9b60b202-0e31-4642-a759-c9687f13e579-kube-api-access-48ch4\") pod \"crc-debug-vc9wd\" (UID: \"9b60b202-0e31-4642-a759-c9687f13e579\") " pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:02 crc kubenswrapper[4678]: I1124 13:09:02.870894 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:03 crc kubenswrapper[4678]: I1124 13:09:03.195827 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" event={"ID":"9b60b202-0e31-4642-a759-c9687f13e579","Type":"ContainerStarted","Data":"0bddc80bcd0275e41c102647830fa6533b22927eff7b7947db86c6d84215b204"} Nov 24 13:09:04 crc kubenswrapper[4678]: I1124 13:09:04.209494 4678 generic.go:334] "Generic (PLEG): container finished" podID="9b60b202-0e31-4642-a759-c9687f13e579" containerID="802bd43628b4f353873773dfcdc6edcdbd9c33265a89f1b3c242ac4816d1f9ba" exitCode=0 Nov 24 13:09:04 crc kubenswrapper[4678]: I1124 13:09:04.209546 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" event={"ID":"9b60b202-0e31-4642-a759-c9687f13e579","Type":"ContainerDied","Data":"802bd43628b4f353873773dfcdc6edcdbd9c33265a89f1b3c242ac4816d1f9ba"} Nov 24 13:09:05 crc kubenswrapper[4678]: I1124 13:09:05.366857 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:05 crc kubenswrapper[4678]: I1124 13:09:05.439996 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48ch4\" (UniqueName: \"kubernetes.io/projected/9b60b202-0e31-4642-a759-c9687f13e579-kube-api-access-48ch4\") pod \"9b60b202-0e31-4642-a759-c9687f13e579\" (UID: \"9b60b202-0e31-4642-a759-c9687f13e579\") " Nov 24 13:09:05 crc kubenswrapper[4678]: I1124 13:09:05.440341 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b60b202-0e31-4642-a759-c9687f13e579-host\") pod \"9b60b202-0e31-4642-a759-c9687f13e579\" (UID: \"9b60b202-0e31-4642-a759-c9687f13e579\") " Nov 24 13:09:05 crc kubenswrapper[4678]: I1124 13:09:05.440792 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60b202-0e31-4642-a759-c9687f13e579-host" (OuterVolumeSpecName: "host") pod "9b60b202-0e31-4642-a759-c9687f13e579" (UID: "9b60b202-0e31-4642-a759-c9687f13e579"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:09:05 crc kubenswrapper[4678]: I1124 13:09:05.442129 4678 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b60b202-0e31-4642-a759-c9687f13e579-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:05 crc kubenswrapper[4678]: I1124 13:09:05.446823 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b60b202-0e31-4642-a759-c9687f13e579-kube-api-access-48ch4" (OuterVolumeSpecName: "kube-api-access-48ch4") pod "9b60b202-0e31-4642-a759-c9687f13e579" (UID: "9b60b202-0e31-4642-a759-c9687f13e579"). InnerVolumeSpecName "kube-api-access-48ch4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:09:05 crc kubenswrapper[4678]: I1124 13:09:05.543846 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48ch4\" (UniqueName: \"kubernetes.io/projected/9b60b202-0e31-4642-a759-c9687f13e579-kube-api-access-48ch4\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:06 crc kubenswrapper[4678]: I1124 13:09:06.235231 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" event={"ID":"9b60b202-0e31-4642-a759-c9687f13e579","Type":"ContainerDied","Data":"0bddc80bcd0275e41c102647830fa6533b22927eff7b7947db86c6d84215b204"} Nov 24 13:09:06 crc kubenswrapper[4678]: I1124 13:09:06.235304 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bddc80bcd0275e41c102647830fa6533b22927eff7b7947db86c6d84215b204" Nov 24 13:09:06 crc kubenswrapper[4678]: I1124 13:09:06.235315 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-vc9wd" Nov 24 13:09:06 crc kubenswrapper[4678]: I1124 13:09:06.987458 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lbq7m/crc-debug-vc9wd"] Nov 24 13:09:07 crc kubenswrapper[4678]: I1124 13:09:07.002455 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lbq7m/crc-debug-vc9wd"] Nov 24 13:09:07 crc kubenswrapper[4678]: I1124 13:09:07.921716 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b60b202-0e31-4642-a759-c9687f13e579" path="/var/lib/kubelet/pods/9b60b202-0e31-4642-a759-c9687f13e579/volumes" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.158019 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lbq7m/crc-debug-nxxh7"] Nov 24 13:09:08 crc kubenswrapper[4678]: E1124 13:09:08.159140 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60b202-0e31-4642-a759-c9687f13e579" 
containerName="container-00" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.159157 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60b202-0e31-4642-a759-c9687f13e579" containerName="container-00" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.159504 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60b202-0e31-4642-a759-c9687f13e579" containerName="container-00" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.160631 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.162912 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lbq7m"/"default-dockercfg-z7qlf" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.218193 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-host\") pod \"crc-debug-nxxh7\" (UID: \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\") " pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.218461 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9h8\" (UniqueName: \"kubernetes.io/projected/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-kube-api-access-2x9h8\") pod \"crc-debug-nxxh7\" (UID: \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\") " pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.321198 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-host\") pod \"crc-debug-nxxh7\" (UID: \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\") " pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 
13:09:08.321343 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-host\") pod \"crc-debug-nxxh7\" (UID: \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\") " pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.321371 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x9h8\" (UniqueName: \"kubernetes.io/projected/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-kube-api-access-2x9h8\") pod \"crc-debug-nxxh7\" (UID: \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\") " pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.341955 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x9h8\" (UniqueName: \"kubernetes.io/projected/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-kube-api-access-2x9h8\") pod \"crc-debug-nxxh7\" (UID: \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\") " pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:08 crc kubenswrapper[4678]: I1124 13:09:08.483703 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:09 crc kubenswrapper[4678]: I1124 13:09:09.279776 4678 generic.go:334] "Generic (PLEG): container finished" podID="7ad77bbb-ec91-437d-9cb9-a8c29c299a2f" containerID="2f760202114dd5c4e7e1f2665a457ce6bd64f78b51fb1841ea9e28a8eb0e67ec" exitCode=0 Nov 24 13:09:09 crc kubenswrapper[4678]: I1124 13:09:09.279862 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" event={"ID":"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f","Type":"ContainerDied","Data":"2f760202114dd5c4e7e1f2665a457ce6bd64f78b51fb1841ea9e28a8eb0e67ec"} Nov 24 13:09:09 crc kubenswrapper[4678]: I1124 13:09:09.280218 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" event={"ID":"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f","Type":"ContainerStarted","Data":"8b1a6f1e0070b21dc9cfb46a7a992b02ce52849f41edfd96e0c4666594084bfd"} Nov 24 13:09:09 crc kubenswrapper[4678]: I1124 13:09:09.361117 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lbq7m/crc-debug-nxxh7"] Nov 24 13:09:09 crc kubenswrapper[4678]: I1124 13:09:09.372424 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lbq7m/crc-debug-nxxh7"] Nov 24 13:09:10 crc kubenswrapper[4678]: I1124 13:09:10.420755 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:10 crc kubenswrapper[4678]: I1124 13:09:10.493943 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-host\") pod \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\" (UID: \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\") " Nov 24 13:09:10 crc kubenswrapper[4678]: I1124 13:09:10.494116 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-host" (OuterVolumeSpecName: "host") pod "7ad77bbb-ec91-437d-9cb9-a8c29c299a2f" (UID: "7ad77bbb-ec91-437d-9cb9-a8c29c299a2f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:09:10 crc kubenswrapper[4678]: I1124 13:09:10.494395 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x9h8\" (UniqueName: \"kubernetes.io/projected/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-kube-api-access-2x9h8\") pod \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\" (UID: \"7ad77bbb-ec91-437d-9cb9-a8c29c299a2f\") " Nov 24 13:09:10 crc kubenswrapper[4678]: I1124 13:09:10.495293 4678 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:10 crc kubenswrapper[4678]: I1124 13:09:10.501928 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-kube-api-access-2x9h8" (OuterVolumeSpecName: "kube-api-access-2x9h8") pod "7ad77bbb-ec91-437d-9cb9-a8c29c299a2f" (UID: "7ad77bbb-ec91-437d-9cb9-a8c29c299a2f"). InnerVolumeSpecName "kube-api-access-2x9h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:09:10 crc kubenswrapper[4678]: I1124 13:09:10.598988 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x9h8\" (UniqueName: \"kubernetes.io/projected/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f-kube-api-access-2x9h8\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:11 crc kubenswrapper[4678]: I1124 13:09:11.307344 4678 scope.go:117] "RemoveContainer" containerID="2f760202114dd5c4e7e1f2665a457ce6bd64f78b51fb1841ea9e28a8eb0e67ec" Nov 24 13:09:11 crc kubenswrapper[4678]: I1124 13:09:11.307579 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lbq7m/crc-debug-nxxh7" Nov 24 13:09:11 crc kubenswrapper[4678]: I1124 13:09:11.921091 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad77bbb-ec91-437d-9cb9-a8c29c299a2f" path="/var/lib/kubelet/pods/7ad77bbb-ec91-437d-9cb9-a8c29c299a2f/volumes" Nov 24 13:09:30 crc kubenswrapper[4678]: I1124 13:09:30.297024 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:09:30 crc kubenswrapper[4678]: I1124 13:09:30.298726 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:09:37 crc kubenswrapper[4678]: I1124 13:09:37.985135 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5c096be8-cc8c-4b25-9a96-b64c3566f1a0/aodh-api/0.log" Nov 24 13:09:38 crc kubenswrapper[4678]: I1124 13:09:38.154455 4678 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_aodh-0_5c096be8-cc8c-4b25-9a96-b64c3566f1a0/aodh-evaluator/0.log" Nov 24 13:09:38 crc kubenswrapper[4678]: I1124 13:09:38.208260 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5c096be8-cc8c-4b25-9a96-b64c3566f1a0/aodh-listener/0.log" Nov 24 13:09:38 crc kubenswrapper[4678]: I1124 13:09:38.246984 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5c096be8-cc8c-4b25-9a96-b64c3566f1a0/aodh-notifier/0.log" Nov 24 13:09:38 crc kubenswrapper[4678]: I1124 13:09:38.410095 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75f757b7cd-s6z6f_2d8cb226-d8a1-44b9-8656-e04def590cdc/barbican-api/0.log" Nov 24 13:09:38 crc kubenswrapper[4678]: I1124 13:09:38.476857 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75f757b7cd-s6z6f_2d8cb226-d8a1-44b9-8656-e04def590cdc/barbican-api-log/0.log" Nov 24 13:09:38 crc kubenswrapper[4678]: I1124 13:09:38.756688 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6fcdf46c94-52rq9_44457729-ea53-4b02-bb60-00cd81170d9b/barbican-keystone-listener/0.log" Nov 24 13:09:38 crc kubenswrapper[4678]: I1124 13:09:38.955816 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6fcdf46c94-52rq9_44457729-ea53-4b02-bb60-00cd81170d9b/barbican-keystone-listener-log/0.log" Nov 24 13:09:38 crc kubenswrapper[4678]: I1124 13:09:38.981462 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-586bfddf5f-xk2jd_ea290c11-6cf3-425a-a5be-749d3563adaa/barbican-worker/0.log" Nov 24 13:09:39 crc kubenswrapper[4678]: I1124 13:09:39.052021 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-586bfddf5f-xk2jd_ea290c11-6cf3-425a-a5be-749d3563adaa/barbican-worker-log/0.log" Nov 24 13:09:39 crc kubenswrapper[4678]: I1124 
13:09:39.249017 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9dnzt_d84f1a22-1d8b-4507-bc2d-d7f1ebe3483c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:39 crc kubenswrapper[4678]: I1124 13:09:39.427400 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6ee2246c-b989-4aa6-9592-c84f9e8252e1/ceilometer-central-agent/0.log" Nov 24 13:09:39 crc kubenswrapper[4678]: I1124 13:09:39.494637 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6ee2246c-b989-4aa6-9592-c84f9e8252e1/ceilometer-notification-agent/0.log" Nov 24 13:09:39 crc kubenswrapper[4678]: I1124 13:09:39.714148 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6ee2246c-b989-4aa6-9592-c84f9e8252e1/sg-core/0.log" Nov 24 13:09:39 crc kubenswrapper[4678]: I1124 13:09:39.748386 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6ee2246c-b989-4aa6-9592-c84f9e8252e1/proxy-httpd/0.log" Nov 24 13:09:39 crc kubenswrapper[4678]: I1124 13:09:39.961068 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32725d0f-f32f-4ec4-9982-ebae7a555802/cinder-api-log/0.log" Nov 24 13:09:40 crc kubenswrapper[4678]: I1124 13:09:40.078661 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_32725d0f-f32f-4ec4-9982-ebae7a555802/cinder-api/0.log" Nov 24 13:09:40 crc kubenswrapper[4678]: I1124 13:09:40.193209 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1ebf38af-2df6-49a3-8a00-37ff5996c82e/cinder-scheduler/0.log" Nov 24 13:09:40 crc kubenswrapper[4678]: I1124 13:09:40.297519 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1ebf38af-2df6-49a3-8a00-37ff5996c82e/probe/0.log" Nov 24 13:09:40 crc kubenswrapper[4678]: I1124 13:09:40.394387 4678 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8jh7p_2f93cb91-ae3f-42ef-844b-70d428271ee1/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:40 crc kubenswrapper[4678]: I1124 13:09:40.558972 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-hqbc5_a55cd4bf-43a3-4ba5-a44e-6531b7e6740a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:40 crc kubenswrapper[4678]: I1124 13:09:40.645601 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-lsbwn_79679ecc-800f-4387-8516-8fb01f65610b/init/0.log" Nov 24 13:09:40 crc kubenswrapper[4678]: I1124 13:09:40.931253 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-54bsj_e3962a1c-012b-4c17-85d3-bf3f2f5b6147/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:40 crc kubenswrapper[4678]: I1124 13:09:40.964345 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-lsbwn_79679ecc-800f-4387-8516-8fb01f65610b/init/0.log" Nov 24 13:09:41 crc kubenswrapper[4678]: I1124 13:09:41.043061 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-lsbwn_79679ecc-800f-4387-8516-8fb01f65610b/dnsmasq-dns/0.log" Nov 24 13:09:41 crc kubenswrapper[4678]: I1124 13:09:41.254040 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_59b9920c-98be-4c2e-ba15-63d67e7f8a50/glance-httpd/0.log" Nov 24 13:09:41 crc kubenswrapper[4678]: I1124 13:09:41.308874 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_59b9920c-98be-4c2e-ba15-63d67e7f8a50/glance-log/0.log" Nov 24 13:09:41 crc kubenswrapper[4678]: I1124 13:09:41.447494 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_1f7848f6-dff5-403f-b2bd-22d8a1e43b0c/glance-log/0.log" Nov 24 13:09:41 crc kubenswrapper[4678]: I1124 13:09:41.554261 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1f7848f6-dff5-403f-b2bd-22d8a1e43b0c/glance-httpd/0.log" Nov 24 13:09:42 crc kubenswrapper[4678]: I1124 13:09:42.061813 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-666c8594cc-27c89_6b75b7f8-46a4-423a-bd0f-910b078e32ed/heat-engine/0.log" Nov 24 13:09:42 crc kubenswrapper[4678]: I1124 13:09:42.388797 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-frcbj_35879489-c790-4b02-abb6-da023eef4eac/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:42 crc kubenswrapper[4678]: I1124 13:09:42.631517 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qrmsq_8188cfcf-b26c-4761-886d-786112eb4539/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:43 crc kubenswrapper[4678]: I1124 13:09:43.060054 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-84b6779dd-5vgzv_97d6d2c5-9baf-480a-b82b-d283121c72d3/heat-cfnapi/0.log" Nov 24 13:09:43 crc kubenswrapper[4678]: I1124 13:09:43.416319 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5cc4ff9998-ks46b_f258680a-b33d-4eec-8fce-3f6f5d3a00ee/heat-api/0.log" Nov 24 13:09:43 crc kubenswrapper[4678]: I1124 13:09:43.476948 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399761-g59n6_25a49349-3ad1-4efb-a5b3-851d707c47ac/keystone-cron/0.log" Nov 24 13:09:43 crc kubenswrapper[4678]: I1124 13:09:43.605007 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29399821-b756t_907db682-c7c3-459d-8030-295f0d16951b/keystone-cron/0.log" Nov 24 13:09:43 crc kubenswrapper[4678]: I1124 13:09:43.709208 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3e0031b4-15dc-4530-89ae-ffec2f45e9f7/kube-state-metrics/0.log" Nov 24 13:09:43 crc kubenswrapper[4678]: I1124 13:09:43.760605 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7cb75676bc-dmjv6_eed6b8b9-3443-42af-ab2e-b8695cf8b1e8/keystone-api/0.log" Nov 24 13:09:43 crc kubenswrapper[4678]: I1124 13:09:43.989309 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-87nh4_3c6b4924-9f1f-4528-bb08-480676547ff8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:44 crc kubenswrapper[4678]: I1124 13:09:44.027369 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-sfn2n_de1e2b8c-1820-4954-94b2-c7c021fba2ee/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:44 crc kubenswrapper[4678]: I1124 13:09:44.295332 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_a60ff952-7be9-480a-be2b-ffbe9bddd9ca/mysqld-exporter/0.log" Nov 24 13:09:44 crc kubenswrapper[4678]: I1124 13:09:44.675267 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7k5dj_06c13190-90f2-4686-8ec5-d1c8c8ae6928/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:44 crc kubenswrapper[4678]: I1124 13:09:44.683488 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-697d9cc569-8n57v_76238d6c-0c33-441f-8da3-1b4d23b519d8/neutron-api/0.log" Nov 24 13:09:44 crc kubenswrapper[4678]: I1124 13:09:44.820015 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-697d9cc569-8n57v_76238d6c-0c33-441f-8da3-1b4d23b519d8/neutron-httpd/0.log" Nov 24 13:09:45 crc kubenswrapper[4678]: I1124 13:09:45.381813 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_36187042-d7c3-48fd-9bba-ac9967630015/nova-cell0-conductor-conductor/0.log" Nov 24 13:09:45 crc kubenswrapper[4678]: I1124 13:09:45.692838 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_19cdd516-8b52-4b72-936c-37c619cda4a6/nova-api-log/0.log" Nov 24 13:09:45 crc kubenswrapper[4678]: I1124 13:09:45.745900 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4a85b96e-7419-42cd-80c4-e1d4ef411dee/nova-cell1-conductor-conductor/0.log" Nov 24 13:09:46 crc kubenswrapper[4678]: I1124 13:09:46.079663 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p2rvt_23808fd9-feff-4e7c-835e-dd9658816050/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:46 crc kubenswrapper[4678]: I1124 13:09:46.092716 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_86fd0d08-2581-4fda-a843-7ed2b3b7f756/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 13:09:46 crc kubenswrapper[4678]: I1124 13:09:46.265139 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_19cdd516-8b52-4b72-936c-37c619cda4a6/nova-api-api/0.log" Nov 24 13:09:46 crc kubenswrapper[4678]: I1124 13:09:46.518832 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_34efe18e-641b-4f0c-a39b-94693f74d2bb/nova-metadata-log/0.log" Nov 24 13:09:46 crc kubenswrapper[4678]: I1124 13:09:46.707857 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2cc52ccc-2152-40c4-a3ac-3d029a1f3e60/nova-scheduler-scheduler/0.log" Nov 24 13:09:46 crc kubenswrapper[4678]: I1124 
13:09:46.883843 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25fc6cbb-a91d-4c54-9736-5684da015680/mysql-bootstrap/0.log" Nov 24 13:09:47 crc kubenswrapper[4678]: I1124 13:09:47.069314 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25fc6cbb-a91d-4c54-9736-5684da015680/galera/0.log" Nov 24 13:09:47 crc kubenswrapper[4678]: I1124 13:09:47.084053 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25fc6cbb-a91d-4c54-9736-5684da015680/mysql-bootstrap/0.log" Nov 24 13:09:47 crc kubenswrapper[4678]: I1124 13:09:47.270227 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f4675f4-74be-4f56-a3a6-d7e6aea34614/mysql-bootstrap/0.log" Nov 24 13:09:47 crc kubenswrapper[4678]: I1124 13:09:47.544546 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f4675f4-74be-4f56-a3a6-d7e6aea34614/galera/0.log" Nov 24 13:09:47 crc kubenswrapper[4678]: I1124 13:09:47.567828 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f4675f4-74be-4f56-a3a6-d7e6aea34614/mysql-bootstrap/0.log" Nov 24 13:09:47 crc kubenswrapper[4678]: I1124 13:09:47.729279 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b6ac91fd-dfdb-48aa-94b9-588f6a6a7ce7/openstackclient/0.log" Nov 24 13:09:47 crc kubenswrapper[4678]: I1124 13:09:47.836213 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-blf4t_de344c51-a739-44dc-b0a2-914839d40a8b/ovn-controller/0.log" Nov 24 13:09:48 crc kubenswrapper[4678]: I1124 13:09:48.076205 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vbxmj_bfcb7171-feaa-413b-a0af-e4adf0bef864/openstack-network-exporter/0.log" Nov 24 13:09:48 crc kubenswrapper[4678]: I1124 13:09:48.311108 4678 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xnsx2_d9a9841c-3831-4419-a66f-0c84a801082f/ovsdb-server-init/0.log" Nov 24 13:09:48 crc kubenswrapper[4678]: I1124 13:09:48.575073 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xnsx2_d9a9841c-3831-4419-a66f-0c84a801082f/ovsdb-server/0.log" Nov 24 13:09:48 crc kubenswrapper[4678]: I1124 13:09:48.594991 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xnsx2_d9a9841c-3831-4419-a66f-0c84a801082f/ovsdb-server-init/0.log" Nov 24 13:09:48 crc kubenswrapper[4678]: I1124 13:09:48.605343 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xnsx2_d9a9841c-3831-4419-a66f-0c84a801082f/ovs-vswitchd/0.log" Nov 24 13:09:48 crc kubenswrapper[4678]: I1124 13:09:48.899860 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-tprdh_85dc2c98-9e06-457d-85be-821a21514762/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:49 crc kubenswrapper[4678]: I1124 13:09:49.104771 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_dc293cb3-7b1d-4102-b9c3-65e58516ec79/openstack-network-exporter/0.log" Nov 24 13:09:49 crc kubenswrapper[4678]: I1124 13:09:49.196239 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_dc293cb3-7b1d-4102-b9c3-65e58516ec79/ovn-northd/0.log" Nov 24 13:09:49 crc kubenswrapper[4678]: I1124 13:09:49.377153 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_78965549-7245-45c4-a523-132073321076/openstack-network-exporter/0.log" Nov 24 13:09:49 crc kubenswrapper[4678]: I1124 13:09:49.460748 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_78965549-7245-45c4-a523-132073321076/ovsdbserver-nb/0.log" Nov 24 13:09:49 crc kubenswrapper[4678]: I1124 
13:09:49.634946 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9cbb9f62-41c9-4c77-b572-e14fb76a8b45/openstack-network-exporter/0.log" Nov 24 13:09:49 crc kubenswrapper[4678]: I1124 13:09:49.738288 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9cbb9f62-41c9-4c77-b572-e14fb76a8b45/ovsdbserver-sb/0.log" Nov 24 13:09:49 crc kubenswrapper[4678]: I1124 13:09:49.757550 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_34efe18e-641b-4f0c-a39b-94693f74d2bb/nova-metadata-metadata/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.093189 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f4c4bbb96-gnmrh_18ccf264-50f3-476e-9640-1a4f3d23044f/placement-api/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.177989 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b2f0329-4af5-4426-a61e-2b3b1deff8a7/init-config-reloader/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.270038 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f4c4bbb96-gnmrh_18ccf264-50f3-476e-9640-1a4f3d23044f/placement-log/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.313231 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b2f0329-4af5-4426-a61e-2b3b1deff8a7/init-config-reloader/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.373183 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b2f0329-4af5-4426-a61e-2b3b1deff8a7/config-reloader/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.432604 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b2f0329-4af5-4426-a61e-2b3b1deff8a7/prometheus/0.log" Nov 24 13:09:50 crc 
kubenswrapper[4678]: I1124 13:09:50.568657 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b2f0329-4af5-4426-a61e-2b3b1deff8a7/thanos-sidecar/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.640099 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2b3ff76d-79e0-4f90-8b4a-7763c3ca8167/setup-container/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.842312 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2b3ff76d-79e0-4f90-8b4a-7763c3ca8167/setup-container/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.899538 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_87e447ce-94b3-4e59-a513-fec289651bd6/setup-container/0.log" Nov 24 13:09:50 crc kubenswrapper[4678]: I1124 13:09:50.906058 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2b3ff76d-79e0-4f90-8b4a-7763c3ca8167/rabbitmq/0.log" Nov 24 13:09:51 crc kubenswrapper[4678]: I1124 13:09:51.144192 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_87e447ce-94b3-4e59-a513-fec289651bd6/setup-container/0.log" Nov 24 13:09:51 crc kubenswrapper[4678]: I1124 13:09:51.171105 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_87e447ce-94b3-4e59-a513-fec289651bd6/rabbitmq/0.log" Nov 24 13:09:51 crc kubenswrapper[4678]: I1124 13:09:51.196979 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lhpl7_5826f176-5b24-4f37-93db-b8ab73e42443/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:51 crc kubenswrapper[4678]: I1124 13:09:51.378418 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mz67m_85b55648-6ef0-4b5f-aa62-c0cadcc6d66d/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:51 crc kubenswrapper[4678]: I1124 13:09:51.466042 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qv94t_477ad805-b800-4cb5-b0ae-9fb064cc09ee/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:51 crc kubenswrapper[4678]: I1124 13:09:51.670578 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-vxmd2_49c5d423-1095-46d6-9054-a1957402fd7e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:51 crc kubenswrapper[4678]: I1124 13:09:51.743435 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mwg84_dd152515-28eb-453c-a841-34dc603a3c3d/ssh-known-hosts-edpm-deployment/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.017132 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-74f7b98495-b5gj8_95ada9de-2ac2-4ea9-9d4d-0ef4293da59f/proxy-server/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.222765 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-74f7b98495-b5gj8_95ada9de-2ac2-4ea9-9d4d-0ef4293da59f/proxy-httpd/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.289976 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4wb58_1d9fedfc-2539-44c3-9124-7b5c96af23da/swift-ring-rebalance/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.393327 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/account-auditor/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.480380 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/account-reaper/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.569593 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/account-replicator/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.580435 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/account-server/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.673491 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/container-auditor/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.774974 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/container-replicator/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.814067 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/container-server/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.824958 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/container-updater/0.log" Nov 24 13:09:52 crc kubenswrapper[4678]: I1124 13:09:52.971982 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/object-auditor/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.035760 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/object-server/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.097229 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/object-expirer/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.137330 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/object-replicator/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.262453 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/object-updater/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.266127 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/rsync/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.328356 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1a7a4a62-9baa-4df8-ba83-688dc6817249/swift-recon-cron/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.600245 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7s7bn_8106bb6e-2abf-42db-8e44-80656738e917/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.691604 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-x8q6t_178a6623-f5e9-4ead-a910-e4ca618af68c/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:53 crc kubenswrapper[4678]: I1124 13:09:53.914093 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3915bea2-2199-409f-b6f6-842f0b991f93/test-operator-logs-container/0.log" Nov 24 13:09:54 crc kubenswrapper[4678]: I1124 13:09:54.111912 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-qvz5j_2e8e9e91-5959-4640-8cea-d21f383c0c54/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:09:54 crc kubenswrapper[4678]: I1124 13:09:54.988624 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fa52a8b5-88fb-4f22-b067-edbdcee003ea/tempest-tests-tempest-tests-runner/0.log" Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.296372 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.296981 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.297040 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.298104 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d49db0a3acb427f624097f22598b79529846e1454fe47b119a335df94a836cf"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.298160 4678 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://1d49db0a3acb427f624097f22598b79529846e1454fe47b119a335df94a836cf" gracePeriod=600 Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.914356 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="1d49db0a3acb427f624097f22598b79529846e1454fe47b119a335df94a836cf" exitCode=0 Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.914870 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"1d49db0a3acb427f624097f22598b79529846e1454fe47b119a335df94a836cf"} Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.914956 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6"} Nov 24 13:10:00 crc kubenswrapper[4678]: I1124 13:10:00.914981 4678 scope.go:117] "RemoveContainer" containerID="4b91b8ed75c9d3bc266e50a051e39b492dfabfe2c6cba4728223fc43cfae4497" Nov 24 13:10:07 crc kubenswrapper[4678]: I1124 13:10:07.397044 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_559dccdf-14d1-43da-9acf-ddc0ae3fef0a/memcached/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.202106 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb_37b5d808-3ae5-47a2-95d5-fb22a1e073de/util/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.409895 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb_37b5d808-3ae5-47a2-95d5-fb22a1e073de/util/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.413043 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb_37b5d808-3ae5-47a2-95d5-fb22a1e073de/pull/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.417726 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb_37b5d808-3ae5-47a2-95d5-fb22a1e073de/pull/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.601609 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb_37b5d808-3ae5-47a2-95d5-fb22a1e073de/pull/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.618173 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb_37b5d808-3ae5-47a2-95d5-fb22a1e073de/util/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.641820 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c50002800aeee0c9ba0d315337944b3c0e4420aac051b8aeb69d96fc4r9tzb_37b5d808-3ae5-47a2-95d5-fb22a1e073de/extract/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.811358 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-xlx8j_1d845025-efc3-47c5-b640-59eeafc744a2/kube-rbac-proxy/0.log" Nov 24 13:10:22 crc kubenswrapper[4678]: I1124 13:10:22.873756 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-nxdjc_7f7a3294-7af7-44cb-95b7-3214cda4de48/kube-rbac-proxy/0.log" Nov 24 13:10:22 crc 
kubenswrapper[4678]: I1124 13:10:22.880713 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-xlx8j_1d845025-efc3-47c5-b640-59eeafc744a2/manager/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.047451 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-nxdjc_7f7a3294-7af7-44cb-95b7-3214cda4de48/manager/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.095515 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-gmrd8_e50daf7a-089a-48d0-883f-5db082bb6908/kube-rbac-proxy/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.102712 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-gmrd8_e50daf7a-089a-48d0-883f-5db082bb6908/manager/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.318915 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-cxm7x_276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5/kube-rbac-proxy/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.403270 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-cxm7x_276b61c4-dec2-4f5e-a5bd-ac814c7d0fc5/manager/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.480744 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-jjbs2_f98bea89-6852-42c9-a69b-9867fe021eb8/kube-rbac-proxy/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.588748 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-jjbs2_f98bea89-6852-42c9-a69b-9867fe021eb8/manager/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.629787 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-jk9k4_e9db91a3-68e2-4500-ab6a-d1055c6e6dde/kube-rbac-proxy/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.690728 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-jk9k4_e9db91a3-68e2-4500-ab6a-d1055c6e6dde/manager/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.810658 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-r4sjj_eecabfcc-62de-4512-b5e8-1685d7fd1144/kube-rbac-proxy/0.log" Nov 24 13:10:23 crc kubenswrapper[4678]: I1124 13:10:23.976273 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-q77cx_d2fab4cb-dff4-439e-a97b-b35b8a2203c6/kube-rbac-proxy/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.026512 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-r4sjj_eecabfcc-62de-4512-b5e8-1685d7fd1144/manager/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.061508 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-q77cx_d2fab4cb-dff4-439e-a97b-b35b8a2203c6/manager/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.200294 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-2h8fr_edbe0de9-67d0-49cc-a867-3483035e3c51/kube-rbac-proxy/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.302343 4678 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-2h8fr_edbe0de9-67d0-49cc-a867-3483035e3c51/manager/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.421269 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-9zvz7_5fbf7159-3ac4-4387-a4e5-c9a42cc9e035/kube-rbac-proxy/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.428637 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-9zvz7_5fbf7159-3ac4-4387-a4e5-c9a42cc9e035/manager/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.593469 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-vgv2l_6a9d3c2c-4f10-4d08-bade-aa93ac52e7be/kube-rbac-proxy/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.646126 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-vgv2l_6a9d3c2c-4f10-4d08-bade-aa93ac52e7be/manager/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.751383 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-7kbkq_42e3cbe3-ad98-46e4-9a27-497ad6ca2026/kube-rbac-proxy/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.847386 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-7kbkq_42e3cbe3-ad98-46e4-9a27-497ad6ca2026/manager/0.log" Nov 24 13:10:24 crc kubenswrapper[4678]: I1124 13:10:24.878706 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-wvz4p_be206532-b60c-4047-8835-1b57d1714883/kube-rbac-proxy/0.log" Nov 24 13:10:25 crc 
kubenswrapper[4678]: I1124 13:10:25.066370 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-wvz4p_be206532-b60c-4047-8835-1b57d1714883/manager/0.log" Nov 24 13:10:25 crc kubenswrapper[4678]: I1124 13:10:25.093652 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-q6kcx_32d872bd-6c15-4efa-9c97-9feeebf99191/manager/0.log" Nov 24 13:10:25 crc kubenswrapper[4678]: I1124 13:10:25.123264 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-q6kcx_32d872bd-6c15-4efa-9c97-9feeebf99191/kube-rbac-proxy/0.log" Nov 24 13:10:25 crc kubenswrapper[4678]: I1124 13:10:25.299209 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-rk24x_38bd8adb-717b-4ad8-af98-afe361890a1d/manager/0.log" Nov 24 13:10:25 crc kubenswrapper[4678]: I1124 13:10:25.319351 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-rk24x_38bd8adb-717b-4ad8-af98-afe361890a1d/kube-rbac-proxy/0.log" Nov 24 13:10:25 crc kubenswrapper[4678]: I1124 13:10:25.527859 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-b94c7cdcb-pd6lk_9312f8b9-ab92-4e86-8793-15eb73032357/kube-rbac-proxy/0.log" Nov 24 13:10:25 crc kubenswrapper[4678]: I1124 13:10:25.646029 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-9f56d7bd5-p4btp_e6986d07-7f65-41b6-bde9-a0d486e290dc/kube-rbac-proxy/0.log" Nov 24 13:10:25 crc kubenswrapper[4678]: I1124 13:10:25.919391 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-gwlvt_0d8e008b-c58e-4697-bbb3-5b2c6def254f/registry-server/0.log" Nov 24 13:10:25 crc kubenswrapper[4678]: I1124 13:10:25.979865 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-9f56d7bd5-p4btp_e6986d07-7f65-41b6-bde9-a0d486e290dc/operator/0.log" Nov 24 13:10:26 crc kubenswrapper[4678]: I1124 13:10:26.191021 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-q6dxg_cf5a2355-2895-4522-b4dc-cca47eb2d33f/kube-rbac-proxy/0.log" Nov 24 13:10:26 crc kubenswrapper[4678]: I1124 13:10:26.302918 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-q6dxg_cf5a2355-2895-4522-b4dc-cca47eb2d33f/manager/0.log" Nov 24 13:10:26 crc kubenswrapper[4678]: I1124 13:10:26.421595 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-cj546_4599c525-39b6-412f-b668-79c5e575c42e/kube-rbac-proxy/0.log" Nov 24 13:10:26 crc kubenswrapper[4678]: I1124 13:10:26.460043 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-cj546_4599c525-39b6-412f-b668-79c5e575c42e/manager/0.log" Nov 24 13:10:26 crc kubenswrapper[4678]: I1124 13:10:26.641297 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-npq55_eff9ae6e-ce8e-4a8c-a862-4cb4e4e75560/operator/0.log" Nov 24 13:10:26 crc kubenswrapper[4678]: I1124 13:10:26.765831 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-x8n72_0fb5a95d-61ef-4850-ba59-0d637233ae88/kube-rbac-proxy/0.log" Nov 24 13:10:26 crc kubenswrapper[4678]: I1124 13:10:26.779221 4678 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-x8n72_0fb5a95d-61ef-4850-ba59-0d637233ae88/manager/0.log" Nov 24 13:10:26 crc kubenswrapper[4678]: I1124 13:10:26.947106 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7d86657865-d4wl2_d494c9ab-cbef-4a2a-a865-2921ec2ab9e7/kube-rbac-proxy/0.log" Nov 24 13:10:27 crc kubenswrapper[4678]: I1124 13:10:27.018299 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-b94c7cdcb-pd6lk_9312f8b9-ab92-4e86-8793-15eb73032357/manager/0.log" Nov 24 13:10:27 crc kubenswrapper[4678]: I1124 13:10:27.206589 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-bts74_2e9318f0-ff18-4a7b-8a43-2c37c3d0d593/manager/0.log" Nov 24 13:10:27 crc kubenswrapper[4678]: I1124 13:10:27.229900 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-bts74_2e9318f0-ff18-4a7b-8a43-2c37c3d0d593/kube-rbac-proxy/0.log" Nov 24 13:10:27 crc kubenswrapper[4678]: I1124 13:10:27.290621 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7d86657865-d4wl2_d494c9ab-cbef-4a2a-a865-2921ec2ab9e7/manager/0.log" Nov 24 13:10:27 crc kubenswrapper[4678]: I1124 13:10:27.297333 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-5q2rm_61e95e5c-75b3-4d08-acdd-d28fa075a707/kube-rbac-proxy/0.log" Nov 24 13:10:27 crc kubenswrapper[4678]: I1124 13:10:27.400228 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-5q2rm_61e95e5c-75b3-4d08-acdd-d28fa075a707/manager/0.log" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 
13:10:40.394564 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lvn25"] Nov 24 13:10:40 crc kubenswrapper[4678]: E1124 13:10:40.395866 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad77bbb-ec91-437d-9cb9-a8c29c299a2f" containerName="container-00" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.395885 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad77bbb-ec91-437d-9cb9-a8c29c299a2f" containerName="container-00" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.396192 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad77bbb-ec91-437d-9cb9-a8c29c299a2f" containerName="container-00" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.402037 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.421657 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvn25"] Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.470605 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-catalog-content\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.470833 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-utilities\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.470928 4678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpbh\" (UniqueName: \"kubernetes.io/projected/7bb37158-10fb-4049-9039-2f367592397f-kube-api-access-tjpbh\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.572959 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-utilities\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.573080 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpbh\" (UniqueName: \"kubernetes.io/projected/7bb37158-10fb-4049-9039-2f367592397f-kube-api-access-tjpbh\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.573241 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-catalog-content\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.574019 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-catalog-content\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.574301 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-utilities\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.579479 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8nm92"] Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.581994 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.597282 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8nm92"] Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.616601 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpbh\" (UniqueName: \"kubernetes.io/projected/7bb37158-10fb-4049-9039-2f367592397f-kube-api-access-tjpbh\") pod \"community-operators-lvn25\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.675571 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-catalog-content\") pod \"certified-operators-8nm92\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.675676 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-utilities\") pod \"certified-operators-8nm92\" (UID: 
\"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.675733 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9575\" (UniqueName: \"kubernetes.io/projected/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-kube-api-access-n9575\") pod \"certified-operators-8nm92\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.737466 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.779447 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-catalog-content\") pod \"certified-operators-8nm92\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.779539 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-utilities\") pod \"certified-operators-8nm92\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.779605 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9575\" (UniqueName: \"kubernetes.io/projected/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-kube-api-access-n9575\") pod \"certified-operators-8nm92\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.780496 4678 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-catalog-content\") pod \"certified-operators-8nm92\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.780800 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-utilities\") pod \"certified-operators-8nm92\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.817920 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9575\" (UniqueName: \"kubernetes.io/projected/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-kube-api-access-n9575\") pod \"certified-operators-8nm92\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:40 crc kubenswrapper[4678]: I1124 13:10:40.972594 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.026421 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8nm92"] Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.069557 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvn25"] Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.379642 4678 generic.go:334] "Generic (PLEG): container finished" podID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerID="7a4b92b195f004490112fdfce9063569872755ef062054daea818d5e806c82ed" exitCode=0 Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.379728 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nm92" event={"ID":"3da3fdff-5cd4-4612-b4d8-1f6e705a904b","Type":"ContainerDied","Data":"7a4b92b195f004490112fdfce9063569872755ef062054daea818d5e806c82ed"} Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.379846 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nm92" event={"ID":"3da3fdff-5cd4-4612-b4d8-1f6e705a904b","Type":"ContainerStarted","Data":"40d2afc0955fae7397c56f7cfe4305e5537aa473ea0013f82aafab83e900b39e"} Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.385319 4678 generic.go:334] "Generic (PLEG): container finished" podID="7bb37158-10fb-4049-9039-2f367592397f" containerID="e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb" exitCode=0 Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.385378 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn25" event={"ID":"7bb37158-10fb-4049-9039-2f367592397f","Type":"ContainerDied","Data":"e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb"} Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.385416 4678 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn25" event={"ID":"7bb37158-10fb-4049-9039-2f367592397f","Type":"ContainerStarted","Data":"f18f195e984e5bf15914fdc6c1c362cf07cca37af2eb43c95ee4ff350c58c9cd"} Nov 24 13:10:42 crc kubenswrapper[4678]: I1124 13:10:42.387613 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.399012 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn25" event={"ID":"7bb37158-10fb-4049-9039-2f367592397f","Type":"ContainerStarted","Data":"f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d"} Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.402055 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nm92" event={"ID":"3da3fdff-5cd4-4612-b4d8-1f6e705a904b","Type":"ContainerStarted","Data":"c58e422cb7b46ac1ce36677c890f36484f099f4d4c2bc9584c26d5aede21ab49"} Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.609858 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c6qk8"] Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.613115 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.625069 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6qk8"] Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.764282 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blbf\" (UniqueName: \"kubernetes.io/projected/14b45d20-cc19-4c62-9c60-a42c3694aca5-kube-api-access-2blbf\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.764495 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-catalog-content\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.764544 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-utilities\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.866727 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-catalog-content\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.866843 4678 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-utilities\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.866960 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blbf\" (UniqueName: \"kubernetes.io/projected/14b45d20-cc19-4c62-9c60-a42c3694aca5-kube-api-access-2blbf\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.867295 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-catalog-content\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.867719 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-utilities\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.898641 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blbf\" (UniqueName: \"kubernetes.io/projected/14b45d20-cc19-4c62-9c60-a42c3694aca5-kube-api-access-2blbf\") pod \"redhat-operators-c6qk8\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:43 crc kubenswrapper[4678]: I1124 13:10:43.974229 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:44 crc kubenswrapper[4678]: I1124 13:10:44.671626 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6qk8"] Nov 24 13:10:44 crc kubenswrapper[4678]: W1124 13:10:44.701620 4678 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b45d20_cc19_4c62_9c60_a42c3694aca5.slice/crio-27af58992a9eac5cb799bae24e1c0fd35970768fd8c055ed903ceee7c57c43c3 WatchSource:0}: Error finding container 27af58992a9eac5cb799bae24e1c0fd35970768fd8c055ed903ceee7c57c43c3: Status 404 returned error can't find the container with id 27af58992a9eac5cb799bae24e1c0fd35970768fd8c055ed903ceee7c57c43c3 Nov 24 13:10:45 crc kubenswrapper[4678]: I1124 13:10:45.428087 4678 generic.go:334] "Generic (PLEG): container finished" podID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerID="2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5" exitCode=0 Nov 24 13:10:45 crc kubenswrapper[4678]: I1124 13:10:45.428157 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6qk8" event={"ID":"14b45d20-cc19-4c62-9c60-a42c3694aca5","Type":"ContainerDied","Data":"2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5"} Nov 24 13:10:45 crc kubenswrapper[4678]: I1124 13:10:45.428722 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6qk8" event={"ID":"14b45d20-cc19-4c62-9c60-a42c3694aca5","Type":"ContainerStarted","Data":"27af58992a9eac5cb799bae24e1c0fd35970768fd8c055ed903ceee7c57c43c3"} Nov 24 13:10:45 crc kubenswrapper[4678]: I1124 13:10:45.644514 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5fwc2_45a91a43-cc29-4d11-b78b-27f24c8f89a1/control-plane-machine-set-operator/0.log" Nov 24 13:10:45 crc 
kubenswrapper[4678]: I1124 13:10:45.859612 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2qlj9_a44a8ca4-92df-406f-8ee7-37da7a5f6d8b/kube-rbac-proxy/0.log" Nov 24 13:10:45 crc kubenswrapper[4678]: I1124 13:10:45.973088 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2qlj9_a44a8ca4-92df-406f-8ee7-37da7a5f6d8b/machine-api-operator/0.log" Nov 24 13:10:46 crc kubenswrapper[4678]: I1124 13:10:46.441142 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6qk8" event={"ID":"14b45d20-cc19-4c62-9c60-a42c3694aca5","Type":"ContainerStarted","Data":"280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80"} Nov 24 13:10:46 crc kubenswrapper[4678]: I1124 13:10:46.443294 4678 generic.go:334] "Generic (PLEG): container finished" podID="7bb37158-10fb-4049-9039-2f367592397f" containerID="f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d" exitCode=0 Nov 24 13:10:46 crc kubenswrapper[4678]: I1124 13:10:46.443389 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn25" event={"ID":"7bb37158-10fb-4049-9039-2f367592397f","Type":"ContainerDied","Data":"f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d"} Nov 24 13:10:46 crc kubenswrapper[4678]: I1124 13:10:46.446410 4678 generic.go:334] "Generic (PLEG): container finished" podID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerID="c58e422cb7b46ac1ce36677c890f36484f099f4d4c2bc9584c26d5aede21ab49" exitCode=0 Nov 24 13:10:46 crc kubenswrapper[4678]: I1124 13:10:46.446452 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nm92" event={"ID":"3da3fdff-5cd4-4612-b4d8-1f6e705a904b","Type":"ContainerDied","Data":"c58e422cb7b46ac1ce36677c890f36484f099f4d4c2bc9584c26d5aede21ab49"} Nov 24 13:10:48 crc kubenswrapper[4678]: 
I1124 13:10:48.473924 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn25" event={"ID":"7bb37158-10fb-4049-9039-2f367592397f","Type":"ContainerStarted","Data":"1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a"} Nov 24 13:10:48 crc kubenswrapper[4678]: I1124 13:10:48.477867 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nm92" event={"ID":"3da3fdff-5cd4-4612-b4d8-1f6e705a904b","Type":"ContainerStarted","Data":"5066997be22cf26f1b3f0d95f30d25e70a1e43a97e693074450aa1aaae8f2945"} Nov 24 13:10:48 crc kubenswrapper[4678]: I1124 13:10:48.503624 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lvn25" podStartSLOduration=3.422556422 podStartE2EDuration="8.502442162s" podCreationTimestamp="2025-11-24 13:10:40 +0000 UTC" firstStartedPulling="2025-11-24 13:10:42.38992798 +0000 UTC m=+6853.320987619" lastFinishedPulling="2025-11-24 13:10:47.46981372 +0000 UTC m=+6858.400873359" observedRunningTime="2025-11-24 13:10:48.492213568 +0000 UTC m=+6859.423273237" watchObservedRunningTime="2025-11-24 13:10:48.502442162 +0000 UTC m=+6859.433501801" Nov 24 13:10:48 crc kubenswrapper[4678]: I1124 13:10:48.527602 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8nm92" podStartSLOduration=3.522902017 podStartE2EDuration="8.527570674s" podCreationTimestamp="2025-11-24 13:10:40 +0000 UTC" firstStartedPulling="2025-11-24 13:10:42.385789799 +0000 UTC m=+6853.316849438" lastFinishedPulling="2025-11-24 13:10:47.390458456 +0000 UTC m=+6858.321518095" observedRunningTime="2025-11-24 13:10:48.51507372 +0000 UTC m=+6859.446133379" watchObservedRunningTime="2025-11-24 13:10:48.527570674 +0000 UTC m=+6859.458630303" Nov 24 13:10:50 crc kubenswrapper[4678]: I1124 13:10:50.739116 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:50 crc kubenswrapper[4678]: I1124 13:10:50.739622 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:10:50 crc kubenswrapper[4678]: I1124 13:10:50.973758 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:50 crc kubenswrapper[4678]: I1124 13:10:50.974237 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:10:51 crc kubenswrapper[4678]: I1124 13:10:51.512938 4678 generic.go:334] "Generic (PLEG): container finished" podID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerID="280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80" exitCode=0 Nov 24 13:10:51 crc kubenswrapper[4678]: I1124 13:10:51.512995 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6qk8" event={"ID":"14b45d20-cc19-4c62-9c60-a42c3694aca5","Type":"ContainerDied","Data":"280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80"} Nov 24 13:10:51 crc kubenswrapper[4678]: I1124 13:10:51.866895 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lvn25" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="registry-server" probeResult="failure" output=< Nov 24 13:10:51 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:10:51 crc kubenswrapper[4678]: > Nov 24 13:10:52 crc kubenswrapper[4678]: I1124 13:10:52.028186 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8nm92" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="registry-server" probeResult="failure" output=< Nov 24 13:10:52 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" 
within 1s Nov 24 13:10:52 crc kubenswrapper[4678]: > Nov 24 13:10:52 crc kubenswrapper[4678]: I1124 13:10:52.535194 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6qk8" event={"ID":"14b45d20-cc19-4c62-9c60-a42c3694aca5","Type":"ContainerStarted","Data":"cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00"} Nov 24 13:10:52 crc kubenswrapper[4678]: I1124 13:10:52.558725 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c6qk8" podStartSLOduration=3.030273699 podStartE2EDuration="9.558691831s" podCreationTimestamp="2025-11-24 13:10:43 +0000 UTC" firstStartedPulling="2025-11-24 13:10:45.431264961 +0000 UTC m=+6856.362324600" lastFinishedPulling="2025-11-24 13:10:51.959683093 +0000 UTC m=+6862.890742732" observedRunningTime="2025-11-24 13:10:52.551396405 +0000 UTC m=+6863.482456044" watchObservedRunningTime="2025-11-24 13:10:52.558691831 +0000 UTC m=+6863.489751460" Nov 24 13:10:53 crc kubenswrapper[4678]: I1124 13:10:53.975447 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:53 crc kubenswrapper[4678]: I1124 13:10:53.975773 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:10:55 crc kubenswrapper[4678]: I1124 13:10:55.029884 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6qk8" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="registry-server" probeResult="failure" output=< Nov 24 13:10:55 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:10:55 crc kubenswrapper[4678]: > Nov 24 13:11:00 crc kubenswrapper[4678]: I1124 13:11:00.342513 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-xh5hn_cd465141-2168-436c-a685-2eb559e2bcb8/cert-manager-controller/0.log" Nov 24 13:11:00 crc kubenswrapper[4678]: I1124 13:11:00.581740 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-ff799_a15c5721-1751-4a87-b3ba-e13cefc0153c/cert-manager-cainjector/0.log" Nov 24 13:11:00 crc kubenswrapper[4678]: I1124 13:11:00.649345 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-cf7d2_51188e3b-bda3-4291-b54f-1abb414dd320/cert-manager-webhook/0.log" Nov 24 13:11:01 crc kubenswrapper[4678]: I1124 13:11:01.804279 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lvn25" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="registry-server" probeResult="failure" output=< Nov 24 13:11:01 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:11:01 crc kubenswrapper[4678]: > Nov 24 13:11:02 crc kubenswrapper[4678]: I1124 13:11:02.027317 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8nm92" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="registry-server" probeResult="failure" output=< Nov 24 13:11:02 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:11:02 crc kubenswrapper[4678]: > Nov 24 13:11:05 crc kubenswrapper[4678]: I1124 13:11:05.071853 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6qk8" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="registry-server" probeResult="failure" output=< Nov 24 13:11:05 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:11:05 crc kubenswrapper[4678]: > Nov 24 13:11:10 crc kubenswrapper[4678]: I1124 13:11:10.797977 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:11:10 crc kubenswrapper[4678]: I1124 13:11:10.859815 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:11:11 crc kubenswrapper[4678]: I1124 13:11:11.029824 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:11:11 crc kubenswrapper[4678]: I1124 13:11:11.081798 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:11:11 crc kubenswrapper[4678]: I1124 13:11:11.990768 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvn25"] Nov 24 13:11:12 crc kubenswrapper[4678]: I1124 13:11:12.750002 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lvn25" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="registry-server" containerID="cri-o://1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a" gracePeriod=2 Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.380922 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.398347 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8nm92"] Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.398615 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8nm92" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="registry-server" containerID="cri-o://5066997be22cf26f1b3f0d95f30d25e70a1e43a97e693074450aa1aaae8f2945" gracePeriod=2 Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.448998 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjpbh\" (UniqueName: \"kubernetes.io/projected/7bb37158-10fb-4049-9039-2f367592397f-kube-api-access-tjpbh\") pod \"7bb37158-10fb-4049-9039-2f367592397f\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.449303 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-utilities\") pod \"7bb37158-10fb-4049-9039-2f367592397f\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.449345 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-catalog-content\") pod \"7bb37158-10fb-4049-9039-2f367592397f\" (UID: \"7bb37158-10fb-4049-9039-2f367592397f\") " Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.451594 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-utilities" (OuterVolumeSpecName: "utilities") pod "7bb37158-10fb-4049-9039-2f367592397f" (UID: 
"7bb37158-10fb-4049-9039-2f367592397f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.480171 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb37158-10fb-4049-9039-2f367592397f-kube-api-access-tjpbh" (OuterVolumeSpecName: "kube-api-access-tjpbh") pod "7bb37158-10fb-4049-9039-2f367592397f" (UID: "7bb37158-10fb-4049-9039-2f367592397f"). InnerVolumeSpecName "kube-api-access-tjpbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.526786 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bb37158-10fb-4049-9039-2f367592397f" (UID: "7bb37158-10fb-4049-9039-2f367592397f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.552414 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjpbh\" (UniqueName: \"kubernetes.io/projected/7bb37158-10fb-4049-9039-2f367592397f-kube-api-access-tjpbh\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.552454 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.552469 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb37158-10fb-4049-9039-2f367592397f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.778589 4678 generic.go:334] "Generic (PLEG): container finished" 
podID="7bb37158-10fb-4049-9039-2f367592397f" containerID="1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a" exitCode=0 Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.778927 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn25" event={"ID":"7bb37158-10fb-4049-9039-2f367592397f","Type":"ContainerDied","Data":"1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a"} Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.778980 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvn25" event={"ID":"7bb37158-10fb-4049-9039-2f367592397f","Type":"ContainerDied","Data":"f18f195e984e5bf15914fdc6c1c362cf07cca37af2eb43c95ee4ff350c58c9cd"} Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.778985 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvn25" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.779024 4678 scope.go:117] "RemoveContainer" containerID="1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.797601 4678 generic.go:334] "Generic (PLEG): container finished" podID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerID="5066997be22cf26f1b3f0d95f30d25e70a1e43a97e693074450aa1aaae8f2945" exitCode=0 Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.797691 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nm92" event={"ID":"3da3fdff-5cd4-4612-b4d8-1f6e705a904b","Type":"ContainerDied","Data":"5066997be22cf26f1b3f0d95f30d25e70a1e43a97e693074450aa1aaae8f2945"} Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.818069 4678 scope.go:117] "RemoveContainer" containerID="f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.842743 4678 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvn25"] Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.853819 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lvn25"] Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.866018 4678 scope.go:117] "RemoveContainer" containerID="e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.912603 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb37158-10fb-4049-9039-2f367592397f" path="/var/lib/kubelet/pods/7bb37158-10fb-4049-9039-2f367592397f/volumes" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.955172 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.972347 4678 scope.go:117] "RemoveContainer" containerID="1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a" Nov 24 13:11:13 crc kubenswrapper[4678]: E1124 13:11:13.976095 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a\": container with ID starting with 1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a not found: ID does not exist" containerID="1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.976145 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a"} err="failed to get container status \"1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a\": rpc error: code = NotFound desc = could not find container \"1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a\": 
container with ID starting with 1be3c5951b3b4aa8d41579c872258a0b7bfc36e9f5fc969c455c14e475b2978a not found: ID does not exist" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.976170 4678 scope.go:117] "RemoveContainer" containerID="f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d" Nov 24 13:11:13 crc kubenswrapper[4678]: E1124 13:11:13.979770 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d\": container with ID starting with f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d not found: ID does not exist" containerID="f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.979810 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d"} err="failed to get container status \"f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d\": rpc error: code = NotFound desc = could not find container \"f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d\": container with ID starting with f7f2dabcb2cfef1ef64c308f204f86ed75766f988e9363eee03d82180837d98d not found: ID does not exist" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.979838 4678 scope.go:117] "RemoveContainer" containerID="e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb" Nov 24 13:11:13 crc kubenswrapper[4678]: E1124 13:11:13.980126 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb\": container with ID starting with e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb not found: ID does not exist" 
containerID="e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb" Nov 24 13:11:13 crc kubenswrapper[4678]: I1124 13:11:13.980160 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb"} err="failed to get container status \"e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb\": rpc error: code = NotFound desc = could not find container \"e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb\": container with ID starting with e3b6df709db6c2d02b392186f01928df30e33b3f4739415e0e81f813ec514fdb not found: ID does not exist" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.068218 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-catalog-content\") pod \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.068400 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-utilities\") pod \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.068443 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9575\" (UniqueName: \"kubernetes.io/projected/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-kube-api-access-n9575\") pod \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\" (UID: \"3da3fdff-5cd4-4612-b4d8-1f6e705a904b\") " Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.071641 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-utilities" (OuterVolumeSpecName: "utilities") pod 
"3da3fdff-5cd4-4612-b4d8-1f6e705a904b" (UID: "3da3fdff-5cd4-4612-b4d8-1f6e705a904b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.075605 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-kube-api-access-n9575" (OuterVolumeSpecName: "kube-api-access-n9575") pod "3da3fdff-5cd4-4612-b4d8-1f6e705a904b" (UID: "3da3fdff-5cd4-4612-b4d8-1f6e705a904b"). InnerVolumeSpecName "kube-api-access-n9575". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.150612 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3da3fdff-5cd4-4612-b4d8-1f6e705a904b" (UID: "3da3fdff-5cd4-4612-b4d8-1f6e705a904b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.173453 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.173758 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.173847 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9575\" (UniqueName: \"kubernetes.io/projected/3da3fdff-5cd4-4612-b4d8-1f6e705a904b-kube-api-access-n9575\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.816208 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nm92" event={"ID":"3da3fdff-5cd4-4612-b4d8-1f6e705a904b","Type":"ContainerDied","Data":"40d2afc0955fae7397c56f7cfe4305e5537aa473ea0013f82aafab83e900b39e"} Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.816267 4678 scope.go:117] "RemoveContainer" containerID="5066997be22cf26f1b3f0d95f30d25e70a1e43a97e693074450aa1aaae8f2945" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.816357 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8nm92" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.840295 4678 scope.go:117] "RemoveContainer" containerID="c58e422cb7b46ac1ce36677c890f36484f099f4d4c2bc9584c26d5aede21ab49" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.867804 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8nm92"] Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.875885 4678 scope.go:117] "RemoveContainer" containerID="7a4b92b195f004490112fdfce9063569872755ef062054daea818d5e806c82ed" Nov 24 13:11:14 crc kubenswrapper[4678]: I1124 13:11:14.880755 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8nm92"] Nov 24 13:11:15 crc kubenswrapper[4678]: I1124 13:11:15.037669 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6qk8" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="registry-server" probeResult="failure" output=< Nov 24 13:11:15 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:11:15 crc kubenswrapper[4678]: > Nov 24 13:11:15 crc kubenswrapper[4678]: I1124 13:11:15.422758 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-82t9b_81b7f8b9-a0c2-4ef0-9c5e-73b899434dc9/nmstate-console-plugin/0.log" Nov 24 13:11:15 crc kubenswrapper[4678]: I1124 13:11:15.676347 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bjlbs_40792d21-2a53-4dba-9895-127d9414e802/nmstate-handler/0.log" Nov 24 13:11:15 crc kubenswrapper[4678]: I1124 13:11:15.684479 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-xd4vt_1be5edf4-f534-4d7b-ac82-27c9f7ea1e65/kube-rbac-proxy/0.log" Nov 24 13:11:15 crc kubenswrapper[4678]: I1124 13:11:15.761525 4678 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-xd4vt_1be5edf4-f534-4d7b-ac82-27c9f7ea1e65/nmstate-metrics/0.log" Nov 24 13:11:15 crc kubenswrapper[4678]: I1124 13:11:15.897635 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-nwsr8_93a91ea7-eb3e-4e3d-b0e7-8fc451fb9106/nmstate-operator/0.log" Nov 24 13:11:15 crc kubenswrapper[4678]: I1124 13:11:15.909575 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" path="/var/lib/kubelet/pods/3da3fdff-5cd4-4612-b4d8-1f6e705a904b/volumes" Nov 24 13:11:15 crc kubenswrapper[4678]: I1124 13:11:15.999565 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-c2tjw_9d6c6722-a205-4130-8e09-ee82c51491a9/nmstate-webhook/0.log" Nov 24 13:11:25 crc kubenswrapper[4678]: I1124 13:11:25.023360 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6qk8" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="registry-server" probeResult="failure" output=< Nov 24 13:11:25 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:11:25 crc kubenswrapper[4678]: > Nov 24 13:11:29 crc kubenswrapper[4678]: I1124 13:11:29.742158 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b9848658c-p2tjh_77532de8-8fa2-4555-a740-5b2f22acc429/kube-rbac-proxy/0.log" Nov 24 13:11:29 crc kubenswrapper[4678]: I1124 13:11:29.827906 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b9848658c-p2tjh_77532de8-8fa2-4555-a740-5b2f22acc429/manager/0.log" Nov 24 13:11:34 crc kubenswrapper[4678]: I1124 13:11:34.026078 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:11:34 crc kubenswrapper[4678]: I1124 13:11:34.085052 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:11:34 crc kubenswrapper[4678]: I1124 13:11:34.280886 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6qk8"] Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.050613 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c6qk8" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="registry-server" containerID="cri-o://cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00" gracePeriod=2 Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.605226 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.712378 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-utilities\") pod \"14b45d20-cc19-4c62-9c60-a42c3694aca5\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.712607 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2blbf\" (UniqueName: \"kubernetes.io/projected/14b45d20-cc19-4c62-9c60-a42c3694aca5-kube-api-access-2blbf\") pod \"14b45d20-cc19-4c62-9c60-a42c3694aca5\" (UID: \"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.712799 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-catalog-content\") pod \"14b45d20-cc19-4c62-9c60-a42c3694aca5\" (UID: 
\"14b45d20-cc19-4c62-9c60-a42c3694aca5\") " Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.713507 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-utilities" (OuterVolumeSpecName: "utilities") pod "14b45d20-cc19-4c62-9c60-a42c3694aca5" (UID: "14b45d20-cc19-4c62-9c60-a42c3694aca5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.723848 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b45d20-cc19-4c62-9c60-a42c3694aca5-kube-api-access-2blbf" (OuterVolumeSpecName: "kube-api-access-2blbf") pod "14b45d20-cc19-4c62-9c60-a42c3694aca5" (UID: "14b45d20-cc19-4c62-9c60-a42c3694aca5"). InnerVolumeSpecName "kube-api-access-2blbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.797871 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14b45d20-cc19-4c62-9c60-a42c3694aca5" (UID: "14b45d20-cc19-4c62-9c60-a42c3694aca5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.815879 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.815922 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14b45d20-cc19-4c62-9c60-a42c3694aca5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:35 crc kubenswrapper[4678]: I1124 13:11:35.815932 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2blbf\" (UniqueName: \"kubernetes.io/projected/14b45d20-cc19-4c62-9c60-a42c3694aca5-kube-api-access-2blbf\") on node \"crc\" DevicePath \"\"" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.062986 4678 generic.go:334] "Generic (PLEG): container finished" podID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerID="cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00" exitCode=0 Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.063037 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6qk8" event={"ID":"14b45d20-cc19-4c62-9c60-a42c3694aca5","Type":"ContainerDied","Data":"cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00"} Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.063061 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6qk8" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.063086 4678 scope.go:117] "RemoveContainer" containerID="cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.063071 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6qk8" event={"ID":"14b45d20-cc19-4c62-9c60-a42c3694aca5","Type":"ContainerDied","Data":"27af58992a9eac5cb799bae24e1c0fd35970768fd8c055ed903ceee7c57c43c3"} Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.091782 4678 scope.go:117] "RemoveContainer" containerID="280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.111106 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6qk8"] Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.118540 4678 scope.go:117] "RemoveContainer" containerID="2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.126561 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c6qk8"] Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.174147 4678 scope.go:117] "RemoveContainer" containerID="cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00" Nov 24 13:11:36 crc kubenswrapper[4678]: E1124 13:11:36.174691 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00\": container with ID starting with cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00 not found: ID does not exist" containerID="cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.174741 4678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00"} err="failed to get container status \"cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00\": rpc error: code = NotFound desc = could not find container \"cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00\": container with ID starting with cde754362a127c46e3f23a69e1bcbdfac028ac5ec50a56de05132b1299da3a00 not found: ID does not exist" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.174772 4678 scope.go:117] "RemoveContainer" containerID="280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80" Nov 24 13:11:36 crc kubenswrapper[4678]: E1124 13:11:36.175212 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80\": container with ID starting with 280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80 not found: ID does not exist" containerID="280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.175299 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80"} err="failed to get container status \"280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80\": rpc error: code = NotFound desc = could not find container \"280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80\": container with ID starting with 280433c6e8fe37798cd1f99e0c9ee78375a374303f897f829074f0d7d46beb80 not found: ID does not exist" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.175326 4678 scope.go:117] "RemoveContainer" containerID="2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5" Nov 24 13:11:36 crc kubenswrapper[4678]: E1124 
13:11:36.175965 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5\": container with ID starting with 2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5 not found: ID does not exist" containerID="2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5" Nov 24 13:11:36 crc kubenswrapper[4678]: I1124 13:11:36.176056 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5"} err="failed to get container status \"2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5\": rpc error: code = NotFound desc = could not find container \"2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5\": container with ID starting with 2e7e8457cefaad0d2c24f9ad57e765dfdd4fb3f94963f330cdd5b1b5e3c87bb5 not found: ID does not exist" Nov 24 13:11:37 crc kubenswrapper[4678]: I1124 13:11:37.911598 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" path="/var/lib/kubelet/pods/14b45d20-cc19-4c62-9c60-a42c3694aca5/volumes" Nov 24 13:11:43 crc kubenswrapper[4678]: I1124 13:11:43.762281 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-kz7kw_d34e2349-d9d9-47e5-a6ea-cf3fd54efe8f/cluster-logging-operator/0.log" Nov 24 13:11:43 crc kubenswrapper[4678]: I1124 13:11:43.946302 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-srddw_06c7953f-f0d2-4db1-b53e-633539ce1c56/collector/0.log" Nov 24 13:11:43 crc kubenswrapper[4678]: I1124 13:11:43.967070 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_a1a95c24-e0a9-4acb-a52c-7face078ba60/loki-compactor/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: 
I1124 13:11:44.114121 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-jwzsf_f4833108-5c1f-4961-bb34-9bb438a1c4ef/loki-distributor/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: I1124 13:11:44.176079 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-88ddc8cf9-2ldpd_0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2/gateway/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: I1124 13:11:44.197947 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-88ddc8cf9-2ldpd_0f9ea313-9b5b-4ed6-a78e-d1a5e11c2ea2/opa/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: I1124 13:11:44.315059 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-88ddc8cf9-5hnj5_3b4a9171-61ec-4c11-ad33-cf613849ac75/gateway/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: I1124 13:11:44.379839 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-88ddc8cf9-5hnj5_3b4a9171-61ec-4c11-ad33-cf613849ac75/opa/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: I1124 13:11:44.510603 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7ee7bb9f-9ca9-491b-820c-d6e359bb06ec/loki-index-gateway/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: I1124 13:11:44.713766 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_bae3408d-f5fc-4bc0-b911-69de95e61536/loki-ingester/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: I1124 13:11:44.887018 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-dbwm8_b85d2201-78d6-477e-a798-2096dc5b916a/loki-querier/0.log" Nov 24 13:11:44 crc kubenswrapper[4678]: I1124 13:11:44.897949 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-zw4kq_e50b79b7-550a-4135-9a07-71ba28340eb6/loki-query-frontend/0.log" Nov 24 13:11:58 crc kubenswrapper[4678]: I1124 13:11:58.935851 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-89x9s_d6d53fc3-a79e-4249-86ab-e7588111b6ba/kube-rbac-proxy/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.125952 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-89x9s_d6d53fc3-a79e-4249-86ab-e7588111b6ba/controller/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.176257 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-frr-files/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.363516 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-frr-files/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.403591 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-metrics/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.443559 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-reloader/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.450342 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-reloader/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.690227 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-metrics/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.707548 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-frr-files/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.711425 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-reloader/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.730433 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-metrics/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.877878 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-frr-files/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.906477 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-metrics/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.948780 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/controller/0.log" Nov 24 13:11:59 crc kubenswrapper[4678]: I1124 13:11:59.957490 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/cp-reloader/0.log" Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.123457 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/frr-metrics/0.log" Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.181283 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/kube-rbac-proxy/0.log" Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.241957 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/kube-rbac-proxy-frr/0.log" Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.296403 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.296454 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.399143 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/reloader/0.log" Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.528577 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-266dw_205823f2-053a-4c0b-9e24-debc45170c30/frr-k8s-webhook-server/0.log" Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.739768 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57c67cd666-lmh62_59194b72-d4c7-47a0-8cb2-b61ea454172c/manager/0.log" Nov 24 13:12:00 crc kubenswrapper[4678]: I1124 13:12:00.996949 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bccc67d9d-ndzx9_21c5aca7-95a6-4f08-96b8-4beca12e41cf/webhook-server/0.log" Nov 24 13:12:01 crc kubenswrapper[4678]: I1124 13:12:01.024430 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-p9g7l_9737f178-41ad-4deb-9d13-4245d6a31868/kube-rbac-proxy/0.log" Nov 24 13:12:02 crc kubenswrapper[4678]: I1124 13:12:02.011981 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-p9g7l_9737f178-41ad-4deb-9d13-4245d6a31868/speaker/0.log" Nov 24 13:12:02 crc kubenswrapper[4678]: I1124 13:12:02.437805 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mmwxw_763766a9-0307-4ba2-8545-26a817b1f410/frr/0.log" Nov 24 13:12:14 crc kubenswrapper[4678]: I1124 13:12:14.862825 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57_851f5f66-c12d-4242-aa64-12056f528f46/util/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.092984 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57_851f5f66-c12d-4242-aa64-12056f528f46/util/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.127121 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57_851f5f66-c12d-4242-aa64-12056f528f46/pull/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.149952 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57_851f5f66-c12d-4242-aa64-12056f528f46/pull/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.289888 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57_851f5f66-c12d-4242-aa64-12056f528f46/util/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.314444 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57_851f5f66-c12d-4242-aa64-12056f528f46/extract/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.351342 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8vvb57_851f5f66-c12d-4242-aa64-12056f528f46/pull/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.502386 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm_a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88/util/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.670652 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm_a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88/util/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.690798 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm_a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88/pull/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.705894 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm_a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88/pull/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.893454 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm_a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88/util/0.log" Nov 24 13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.903527 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm_a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88/pull/0.log" Nov 24 
13:12:15 crc kubenswrapper[4678]: I1124 13:12:15.904032 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewsxbm_a8bf4ef9-3b22-4f05-b27b-f0bc5afa8a88/extract/0.log" Nov 24 13:12:16 crc kubenswrapper[4678]: I1124 13:12:16.270734 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk_4e0a12a4-1d26-4559-857f-6b9d4a76924d/util/0.log" Nov 24 13:12:16 crc kubenswrapper[4678]: I1124 13:12:16.433326 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk_4e0a12a4-1d26-4559-857f-6b9d4a76924d/util/0.log" Nov 24 13:12:16 crc kubenswrapper[4678]: I1124 13:12:16.464858 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk_4e0a12a4-1d26-4559-857f-6b9d4a76924d/pull/0.log" Nov 24 13:12:16 crc kubenswrapper[4678]: I1124 13:12:16.512426 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk_4e0a12a4-1d26-4559-857f-6b9d4a76924d/pull/0.log" Nov 24 13:12:16 crc kubenswrapper[4678]: I1124 13:12:16.641148 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk_4e0a12a4-1d26-4559-857f-6b9d4a76924d/pull/0.log" Nov 24 13:12:16 crc kubenswrapper[4678]: I1124 13:12:16.689781 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk_4e0a12a4-1d26-4559-857f-6b9d4a76924d/util/0.log" Nov 24 13:12:16 crc kubenswrapper[4678]: I1124 13:12:16.699829 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102wmkk_4e0a12a4-1d26-4559-857f-6b9d4a76924d/extract/0.log" Nov 24 13:12:16 crc kubenswrapper[4678]: I1124 13:12:16.807920 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc_8249267d-adcb-4ae7-ba3d-438af2982a22/util/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.003205 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc_8249267d-adcb-4ae7-ba3d-438af2982a22/pull/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.046808 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc_8249267d-adcb-4ae7-ba3d-438af2982a22/util/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.060126 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc_8249267d-adcb-4ae7-ba3d-438af2982a22/pull/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.256497 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc_8249267d-adcb-4ae7-ba3d-438af2982a22/util/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.261722 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc_8249267d-adcb-4ae7-ba3d-438af2982a22/pull/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.282473 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f97zhc_8249267d-adcb-4ae7-ba3d-438af2982a22/extract/0.log" Nov 
24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.423763 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wk9rl_3c1aba28-e8ad-44c9-b67f-a82955ffd06c/extract-utilities/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.632892 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wk9rl_3c1aba28-e8ad-44c9-b67f-a82955ffd06c/extract-utilities/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.654743 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wk9rl_3c1aba28-e8ad-44c9-b67f-a82955ffd06c/extract-content/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.670617 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wk9rl_3c1aba28-e8ad-44c9-b67f-a82955ffd06c/extract-content/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.855232 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wk9rl_3c1aba28-e8ad-44c9-b67f-a82955ffd06c/extract-utilities/0.log" Nov 24 13:12:17 crc kubenswrapper[4678]: I1124 13:12:17.871577 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wk9rl_3c1aba28-e8ad-44c9-b67f-a82955ffd06c/extract-content/0.log" Nov 24 13:12:18 crc kubenswrapper[4678]: I1124 13:12:18.126949 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmdm7_4686ce94-5321-49ca-b107-3f9e755495a8/extract-utilities/0.log" Nov 24 13:12:18 crc kubenswrapper[4678]: I1124 13:12:18.376644 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmdm7_4686ce94-5321-49ca-b107-3f9e755495a8/extract-utilities/0.log" Nov 24 13:12:18 crc kubenswrapper[4678]: I1124 13:12:18.391561 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lmdm7_4686ce94-5321-49ca-b107-3f9e755495a8/extract-content/0.log" Nov 24 13:12:18 crc kubenswrapper[4678]: I1124 13:12:18.463101 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmdm7_4686ce94-5321-49ca-b107-3f9e755495a8/extract-content/0.log" Nov 24 13:12:18 crc kubenswrapper[4678]: I1124 13:12:18.696913 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmdm7_4686ce94-5321-49ca-b107-3f9e755495a8/extract-content/0.log" Nov 24 13:12:18 crc kubenswrapper[4678]: I1124 13:12:18.737157 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmdm7_4686ce94-5321-49ca-b107-3f9e755495a8/extract-utilities/0.log" Nov 24 13:12:18 crc kubenswrapper[4678]: I1124 13:12:18.977000 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv_9988e41f-4dd1-473b-b0cd-4c7456b08c8d/util/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.226450 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv_9988e41f-4dd1-473b-b0cd-4c7456b08c8d/util/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.309908 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wk9rl_3c1aba28-e8ad-44c9-b67f-a82955ffd06c/registry-server/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.368108 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv_9988e41f-4dd1-473b-b0cd-4c7456b08c8d/pull/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.377887 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lmdm7_4686ce94-5321-49ca-b107-3f9e755495a8/registry-server/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.379625 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv_9988e41f-4dd1-473b-b0cd-4c7456b08c8d/pull/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.514743 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv_9988e41f-4dd1-473b-b0cd-4c7456b08c8d/util/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.601427 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv_9988e41f-4dd1-473b-b0cd-4c7456b08c8d/extract/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.604997 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6tblrv_9988e41f-4dd1-473b-b0cd-4c7456b08c8d/pull/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.612887 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c2hc5_0f1b87f9-72ea-4db7-a016-17d109b58413/marketplace-operator/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.782938 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82hsh_2ed0e090-9ad7-42be-bfda-9c13a37fc1c7/extract-utilities/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.952910 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82hsh_2ed0e090-9ad7-42be-bfda-9c13a37fc1c7/extract-utilities/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.973734 4678 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82hsh_2ed0e090-9ad7-42be-bfda-9c13a37fc1c7/extract-content/0.log" Nov 24 13:12:19 crc kubenswrapper[4678]: I1124 13:12:19.987647 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82hsh_2ed0e090-9ad7-42be-bfda-9c13a37fc1c7/extract-content/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.370932 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82hsh_2ed0e090-9ad7-42be-bfda-9c13a37fc1c7/extract-utilities/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.441769 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82hsh_2ed0e090-9ad7-42be-bfda-9c13a37fc1c7/extract-content/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.505386 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fzmwn_8c0d2913-b328-4661-8434-5e053b49589f/extract-utilities/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.619059 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82hsh_2ed0e090-9ad7-42be-bfda-9c13a37fc1c7/registry-server/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.684320 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fzmwn_8c0d2913-b328-4661-8434-5e053b49589f/extract-utilities/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.696423 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fzmwn_8c0d2913-b328-4661-8434-5e053b49589f/extract-content/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.743659 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-fzmwn_8c0d2913-b328-4661-8434-5e053b49589f/extract-content/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.925924 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fzmwn_8c0d2913-b328-4661-8434-5e053b49589f/extract-utilities/0.log" Nov 24 13:12:20 crc kubenswrapper[4678]: I1124 13:12:20.963425 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fzmwn_8c0d2913-b328-4661-8434-5e053b49589f/extract-content/0.log" Nov 24 13:12:21 crc kubenswrapper[4678]: I1124 13:12:21.094561 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fzmwn_8c0d2913-b328-4661-8434-5e053b49589f/registry-server/0.log" Nov 24 13:12:30 crc kubenswrapper[4678]: I1124 13:12:30.297291 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:12:30 crc kubenswrapper[4678]: I1124 13:12:30.297924 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:12:33 crc kubenswrapper[4678]: I1124 13:12:33.482660 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-vp9fs_33f972c9-5774-4097-b3fd-a0adcf7f812d/prometheus-operator/0.log" Nov 24 13:12:33 crc kubenswrapper[4678]: I1124 13:12:33.667249 4678 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c5c7cc89-dqwnn_9e2619d2-61fe-46e6-bd91-b9b2e2ab594d/prometheus-operator-admission-webhook/0.log" Nov 24 13:12:33 crc kubenswrapper[4678]: I1124 13:12:33.706320 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c5c7cc89-lb4b8_b704215d-9f17-49e2-9bed-f17a2b0388b1/prometheus-operator-admission-webhook/0.log" Nov 24 13:12:33 crc kubenswrapper[4678]: I1124 13:12:33.880653 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-tx7v7_33b87251-bed8-4721-8955-feede7c367af/operator/0.log" Nov 24 13:12:33 crc kubenswrapper[4678]: I1124 13:12:33.908996 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-sj4wp_2a26fe34-6696-484e-aba7-bf8eb21ff389/observability-ui-dashboards/0.log" Nov 24 13:12:34 crc kubenswrapper[4678]: I1124 13:12:34.077261 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-qj7c6_8eac0e32-d08f-46ca-ba1b-9c0178ec130e/perses-operator/0.log" Nov 24 13:12:46 crc kubenswrapper[4678]: I1124 13:12:46.167892 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b9848658c-p2tjh_77532de8-8fa2-4555-a740-5b2f22acc429/manager/0.log" Nov 24 13:12:46 crc kubenswrapper[4678]: I1124 13:12:46.181542 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7b9848658c-p2tjh_77532de8-8fa2-4555-a740-5b2f22acc429/kube-rbac-proxy/0.log" Nov 24 13:13:00 crc kubenswrapper[4678]: I1124 13:13:00.297945 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:13:00 crc kubenswrapper[4678]: I1124 13:13:00.298552 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:13:00 crc kubenswrapper[4678]: I1124 13:13:00.298609 4678 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" Nov 24 13:13:00 crc kubenswrapper[4678]: I1124 13:13:00.300293 4678 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6"} pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:13:00 crc kubenswrapper[4678]: I1124 13:13:00.300372 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" containerID="cri-o://24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" gracePeriod=600 Nov 24 13:13:00 crc kubenswrapper[4678]: E1124 13:13:00.439675 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" 
podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:13:01 crc kubenswrapper[4678]: I1124 13:13:01.090529 4678 generic.go:334] "Generic (PLEG): container finished" podID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" exitCode=0 Nov 24 13:13:01 crc kubenswrapper[4678]: I1124 13:13:01.090584 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerDied","Data":"24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6"} Nov 24 13:13:01 crc kubenswrapper[4678]: I1124 13:13:01.090650 4678 scope.go:117] "RemoveContainer" containerID="1d49db0a3acb427f624097f22598b79529846e1454fe47b119a335df94a836cf" Nov 24 13:13:01 crc kubenswrapper[4678]: I1124 13:13:01.092181 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:13:01 crc kubenswrapper[4678]: E1124 13:13:01.092887 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:13:12 crc kubenswrapper[4678]: I1124 13:13:12.896786 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:13:12 crc kubenswrapper[4678]: E1124 13:13:12.899640 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:13:23 crc kubenswrapper[4678]: I1124 13:13:23.896281 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:13:23 crc kubenswrapper[4678]: E1124 13:13:23.898364 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:13:35 crc kubenswrapper[4678]: I1124 13:13:35.896195 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:13:35 crc kubenswrapper[4678]: E1124 13:13:35.897118 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:13:46 crc kubenswrapper[4678]: I1124 13:13:46.896021 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:13:46 crc kubenswrapper[4678]: E1124 13:13:46.897587 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:13:59 crc kubenswrapper[4678]: I1124 13:13:59.903747 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:13:59 crc kubenswrapper[4678]: E1124 13:13:59.904568 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:14:12 crc kubenswrapper[4678]: I1124 13:14:12.895812 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:14:13 crc kubenswrapper[4678]: E1124 13:14:12.897057 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:14:24 crc kubenswrapper[4678]: I1124 13:14:24.896880 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:14:24 crc kubenswrapper[4678]: E1124 13:14:24.897994 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:14:37 crc kubenswrapper[4678]: I1124 13:14:37.896593 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:14:37 crc kubenswrapper[4678]: E1124 13:14:37.898096 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:14:46 crc kubenswrapper[4678]: I1124 13:14:46.452255 4678 generic.go:334] "Generic (PLEG): container finished" podID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerID="34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997" exitCode=0 Nov 24 13:14:46 crc kubenswrapper[4678]: I1124 13:14:46.452369 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lbq7m/must-gather-tf29z" event={"ID":"928b11e7-3bbf-44d7-ad03-117642de2eca","Type":"ContainerDied","Data":"34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997"} Nov 24 13:14:46 crc kubenswrapper[4678]: I1124 13:14:46.454219 4678 scope.go:117] "RemoveContainer" containerID="34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997" Nov 24 13:14:46 crc kubenswrapper[4678]: I1124 13:14:46.626696 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lbq7m_must-gather-tf29z_928b11e7-3bbf-44d7-ad03-117642de2eca/gather/0.log" Nov 24 13:14:51 crc kubenswrapper[4678]: I1124 13:14:51.895999 4678 
scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:14:51 crc kubenswrapper[4678]: E1124 13:14:51.897384 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:14:56 crc kubenswrapper[4678]: I1124 13:14:56.465479 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lbq7m/must-gather-tf29z"] Nov 24 13:14:56 crc kubenswrapper[4678]: I1124 13:14:56.466465 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lbq7m/must-gather-tf29z" podUID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerName="copy" containerID="cri-o://87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894" gracePeriod=2 Nov 24 13:14:56 crc kubenswrapper[4678]: I1124 13:14:56.479769 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lbq7m/must-gather-tf29z"] Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.235786 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lbq7m_must-gather-tf29z_928b11e7-3bbf-44d7-ad03-117642de2eca/copy/0.log" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.237290 4678 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.412766 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/928b11e7-3bbf-44d7-ad03-117642de2eca-must-gather-output\") pod \"928b11e7-3bbf-44d7-ad03-117642de2eca\" (UID: \"928b11e7-3bbf-44d7-ad03-117642de2eca\") " Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.413128 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v7w5\" (UniqueName: \"kubernetes.io/projected/928b11e7-3bbf-44d7-ad03-117642de2eca-kube-api-access-6v7w5\") pod \"928b11e7-3bbf-44d7-ad03-117642de2eca\" (UID: \"928b11e7-3bbf-44d7-ad03-117642de2eca\") " Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.426376 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b11e7-3bbf-44d7-ad03-117642de2eca-kube-api-access-6v7w5" (OuterVolumeSpecName: "kube-api-access-6v7w5") pod "928b11e7-3bbf-44d7-ad03-117642de2eca" (UID: "928b11e7-3bbf-44d7-ad03-117642de2eca"). InnerVolumeSpecName "kube-api-access-6v7w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.516888 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v7w5\" (UniqueName: \"kubernetes.io/projected/928b11e7-3bbf-44d7-ad03-117642de2eca-kube-api-access-6v7w5\") on node \"crc\" DevicePath \"\"" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.605025 4678 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lbq7m_must-gather-tf29z_928b11e7-3bbf-44d7-ad03-117642de2eca/copy/0.log" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.606992 4678 generic.go:334] "Generic (PLEG): container finished" podID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerID="87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894" exitCode=143 Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.607108 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lbq7m/must-gather-tf29z" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.607143 4678 scope.go:117] "RemoveContainer" containerID="87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.625374 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928b11e7-3bbf-44d7-ad03-117642de2eca-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "928b11e7-3bbf-44d7-ad03-117642de2eca" (UID: "928b11e7-3bbf-44d7-ad03-117642de2eca"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.690862 4678 scope.go:117] "RemoveContainer" containerID="34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.724193 4678 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/928b11e7-3bbf-44d7-ad03-117642de2eca-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.780858 4678 scope.go:117] "RemoveContainer" containerID="87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894" Nov 24 13:14:57 crc kubenswrapper[4678]: E1124 13:14:57.785161 4678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894\": container with ID starting with 87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894 not found: ID does not exist" containerID="87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.785205 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894"} err="failed to get container status \"87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894\": rpc error: code = NotFound desc = could not find container \"87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894\": container with ID starting with 87c94161442d4c2ee40a23e9d7372ef9b9a9375a31ae0039c915eecb55898894 not found: ID does not exist" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.785232 4678 scope.go:117] "RemoveContainer" containerID="34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997" Nov 24 13:14:57 crc kubenswrapper[4678]: E1124 13:14:57.787687 4678 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997\": container with ID starting with 34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997 not found: ID does not exist" containerID="34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.787764 4678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997"} err="failed to get container status \"34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997\": rpc error: code = NotFound desc = could not find container \"34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997\": container with ID starting with 34305d8ecef2ca2a939d7f12e254b1bbe63b561eb40292019cdb1ae02f608997 not found: ID does not exist" Nov 24 13:14:57 crc kubenswrapper[4678]: I1124 13:14:57.930885 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928b11e7-3bbf-44d7-ad03-117642de2eca" path="/var/lib/kubelet/pods/928b11e7-3bbf-44d7-ad03-117642de2eca/volumes" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.255779 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r"] Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259536 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="extract-content" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259611 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="extract-content" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259713 4678 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="extract-utilities" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259723 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="extract-utilities" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259739 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="extract-utilities" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259750 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="extract-utilities" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259781 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259790 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259816 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259823 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259839 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="extract-utilities" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259846 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="extract-utilities" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259872 4678 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerName="gather" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259879 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerName="gather" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259892 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="extract-content" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259898 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="extract-content" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259911 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerName="copy" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259919 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerName="copy" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259934 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="extract-content" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259941 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" containerName="extract-content" Nov 24 13:15:00 crc kubenswrapper[4678]: E1124 13:15:00.259953 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.259960 4678 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.260370 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da3fdff-5cd4-4612-b4d8-1f6e705a904b" 
containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.260391 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerName="gather" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.260410 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="928b11e7-3bbf-44d7-ad03-117642de2eca" containerName="copy" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.260428 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b45d20-cc19-4c62-9c60-a42c3694aca5" containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.260437 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb37158-10fb-4049-9039-2f367592397f" containerName="registry-server" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.262822 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.277537 4678 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.278527 4678 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.281229 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r"] Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.416622 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17331753-f4ac-4915-86c0-2fe7f5ecb87d-secret-volume\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.417747 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2f89\" (UniqueName: \"kubernetes.io/projected/17331753-f4ac-4915-86c0-2fe7f5ecb87d-kube-api-access-t2f89\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.418022 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17331753-f4ac-4915-86c0-2fe7f5ecb87d-config-volume\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.522809 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2f89\" (UniqueName: \"kubernetes.io/projected/17331753-f4ac-4915-86c0-2fe7f5ecb87d-kube-api-access-t2f89\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.522995 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17331753-f4ac-4915-86c0-2fe7f5ecb87d-config-volume\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.523083 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/17331753-f4ac-4915-86c0-2fe7f5ecb87d-secret-volume\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.524063 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17331753-f4ac-4915-86c0-2fe7f5ecb87d-config-volume\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.532738 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17331753-f4ac-4915-86c0-2fe7f5ecb87d-secret-volume\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.543913 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2f89\" (UniqueName: \"kubernetes.io/projected/17331753-f4ac-4915-86c0-2fe7f5ecb87d-kube-api-access-t2f89\") pod \"collect-profiles-29399835-x6m5r\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:00 crc kubenswrapper[4678]: I1124 13:15:00.597049 4678 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:01 crc kubenswrapper[4678]: I1124 13:15:01.173707 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r"] Nov 24 13:15:01 crc kubenswrapper[4678]: I1124 13:15:01.699727 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" event={"ID":"17331753-f4ac-4915-86c0-2fe7f5ecb87d","Type":"ContainerStarted","Data":"680765a6d1986cdde62f5affb451468c4bcbcee3c297a0102f488c81fa0b5efa"} Nov 24 13:15:01 crc kubenswrapper[4678]: I1124 13:15:01.700281 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" event={"ID":"17331753-f4ac-4915-86c0-2fe7f5ecb87d","Type":"ContainerStarted","Data":"2ba6c90e3c8175c00a23d0948970621c649f547adbfb5b1c85d87d6553bc31f1"} Nov 24 13:15:01 crc kubenswrapper[4678]: I1124 13:15:01.728732 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" podStartSLOduration=1.728640078 podStartE2EDuration="1.728640078s" podCreationTimestamp="2025-11-24 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:15:01.71599299 +0000 UTC m=+7112.647052629" watchObservedRunningTime="2025-11-24 13:15:01.728640078 +0000 UTC m=+7112.659699717" Nov 24 13:15:02 crc kubenswrapper[4678]: I1124 13:15:02.718900 4678 generic.go:334] "Generic (PLEG): container finished" podID="17331753-f4ac-4915-86c0-2fe7f5ecb87d" containerID="680765a6d1986cdde62f5affb451468c4bcbcee3c297a0102f488c81fa0b5efa" exitCode=0 Nov 24 13:15:02 crc kubenswrapper[4678]: I1124 13:15:02.719301 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" event={"ID":"17331753-f4ac-4915-86c0-2fe7f5ecb87d","Type":"ContainerDied","Data":"680765a6d1986cdde62f5affb451468c4bcbcee3c297a0102f488c81fa0b5efa"} Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.328597 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.375469 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17331753-f4ac-4915-86c0-2fe7f5ecb87d-secret-volume\") pod \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.375788 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2f89\" (UniqueName: \"kubernetes.io/projected/17331753-f4ac-4915-86c0-2fe7f5ecb87d-kube-api-access-t2f89\") pod \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.375841 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17331753-f4ac-4915-86c0-2fe7f5ecb87d-config-volume\") pod \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\" (UID: \"17331753-f4ac-4915-86c0-2fe7f5ecb87d\") " Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.377402 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17331753-f4ac-4915-86c0-2fe7f5ecb87d-config-volume" (OuterVolumeSpecName: "config-volume") pod "17331753-f4ac-4915-86c0-2fe7f5ecb87d" (UID: "17331753-f4ac-4915-86c0-2fe7f5ecb87d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.383895 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17331753-f4ac-4915-86c0-2fe7f5ecb87d-kube-api-access-t2f89" (OuterVolumeSpecName: "kube-api-access-t2f89") pod "17331753-f4ac-4915-86c0-2fe7f5ecb87d" (UID: "17331753-f4ac-4915-86c0-2fe7f5ecb87d"). InnerVolumeSpecName "kube-api-access-t2f89". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.398761 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17331753-f4ac-4915-86c0-2fe7f5ecb87d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17331753-f4ac-4915-86c0-2fe7f5ecb87d" (UID: "17331753-f4ac-4915-86c0-2fe7f5ecb87d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.478210 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2f89\" (UniqueName: \"kubernetes.io/projected/17331753-f4ac-4915-86c0-2fe7f5ecb87d-kube-api-access-t2f89\") on node \"crc\" DevicePath \"\"" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.478445 4678 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17331753-f4ac-4915-86c0-2fe7f5ecb87d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.478458 4678 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17331753-f4ac-4915-86c0-2fe7f5ecb87d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.805739 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" 
event={"ID":"17331753-f4ac-4915-86c0-2fe7f5ecb87d","Type":"ContainerDied","Data":"2ba6c90e3c8175c00a23d0948970621c649f547adbfb5b1c85d87d6553bc31f1"} Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.805783 4678 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ba6c90e3c8175c00a23d0948970621c649f547adbfb5b1c85d87d6553bc31f1" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.805894 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-x6m5r" Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.823247 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x"] Nov 24 13:15:04 crc kubenswrapper[4678]: I1124 13:15:04.846996 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-8gc9x"] Nov 24 13:15:05 crc kubenswrapper[4678]: I1124 13:15:05.896617 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:15:05 crc kubenswrapper[4678]: E1124 13:15:05.897480 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:15:05 crc kubenswrapper[4678]: I1124 13:15:05.916259 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5444d6db-2b72-4ef3-8dc5-da0f2540e49d" path="/var/lib/kubelet/pods/5444d6db-2b72-4ef3-8dc5-da0f2540e49d/volumes" Nov 24 13:15:17 crc kubenswrapper[4678]: I1124 13:15:17.897250 4678 scope.go:117] "RemoveContainer" 
containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:15:17 crc kubenswrapper[4678]: E1124 13:15:17.898881 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:15:32 crc kubenswrapper[4678]: I1124 13:15:32.896596 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:15:32 crc kubenswrapper[4678]: E1124 13:15:32.898064 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:15:41 crc kubenswrapper[4678]: I1124 13:15:41.168450 4678 scope.go:117] "RemoveContainer" containerID="a38580ae46064596082351abaa10e2a8d6e2b8fb6b8481cba785c75f39814744" Nov 24 13:15:41 crc kubenswrapper[4678]: I1124 13:15:41.205418 4678 scope.go:117] "RemoveContainer" containerID="802bd43628b4f353873773dfcdc6edcdbd9c33265a89f1b3c242ac4816d1f9ba" Nov 24 13:15:44 crc kubenswrapper[4678]: I1124 13:15:44.896140 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:15:44 crc kubenswrapper[4678]: E1124 13:15:44.897328 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:15:56 crc kubenswrapper[4678]: I1124 13:15:56.895924 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:15:56 crc kubenswrapper[4678]: E1124 13:15:56.900592 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:16:11 crc kubenswrapper[4678]: I1124 13:16:11.897573 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:16:11 crc kubenswrapper[4678]: E1124 13:16:11.898763 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:16:22 crc kubenswrapper[4678]: I1124 13:16:22.896063 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:16:22 crc kubenswrapper[4678]: E1124 13:16:22.897217 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:16:36 crc kubenswrapper[4678]: I1124 13:16:36.896034 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:16:36 crc kubenswrapper[4678]: E1124 13:16:36.897284 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:16:48 crc kubenswrapper[4678]: I1124 13:16:48.896730 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:16:48 crc kubenswrapper[4678]: E1124 13:16:48.897844 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:17:01 crc kubenswrapper[4678]: I1124 13:17:01.896614 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:17:01 crc kubenswrapper[4678]: E1124 13:17:01.899536 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:17:15 crc kubenswrapper[4678]: I1124 13:17:15.896001 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:17:15 crc kubenswrapper[4678]: E1124 13:17:15.897031 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:17:29 crc kubenswrapper[4678]: I1124 13:17:29.911437 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:17:29 crc kubenswrapper[4678]: E1124 13:17:29.914793 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:17:44 crc kubenswrapper[4678]: I1124 13:17:44.896010 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:17:44 crc kubenswrapper[4678]: E1124 13:17:44.897269 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:17:59 crc kubenswrapper[4678]: I1124 13:17:59.908591 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:17:59 crc kubenswrapper[4678]: E1124 13:17:59.909945 4678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhrs6_openshift-machine-config-operator(0d7ceb4b-c0fc-4888-b251-a87db4a2665e)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" Nov 24 13:18:11 crc kubenswrapper[4678]: I1124 13:18:11.896512 4678 scope.go:117] "RemoveContainer" containerID="24ca6ba6e8391cb11416eb4b1152b29a970d1be7765d73eba6ce96b61ee254d6" Nov 24 13:18:12 crc kubenswrapper[4678]: I1124 13:18:12.350289 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" event={"ID":"0d7ceb4b-c0fc-4888-b251-a87db4a2665e","Type":"ContainerStarted","Data":"85515f0e44af0030da8c631fbf4f112a4663c0ddedc570d040f3ed31ec41e9c6"} Nov 24 13:18:59 crc kubenswrapper[4678]: I1124 13:18:59.963615 4678 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fkknb"] Nov 24 13:18:59 crc kubenswrapper[4678]: E1124 13:18:59.965215 4678 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17331753-f4ac-4915-86c0-2fe7f5ecb87d" containerName="collect-profiles" Nov 24 13:18:59 crc kubenswrapper[4678]: I1124 13:18:59.965238 4678 
state_mem.go:107] "Deleted CPUSet assignment" podUID="17331753-f4ac-4915-86c0-2fe7f5ecb87d" containerName="collect-profiles" Nov 24 13:18:59 crc kubenswrapper[4678]: I1124 13:18:59.965537 4678 memory_manager.go:354] "RemoveStaleState removing state" podUID="17331753-f4ac-4915-86c0-2fe7f5ecb87d" containerName="collect-profiles" Nov 24 13:18:59 crc kubenswrapper[4678]: I1124 13:18:59.967999 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:18:59 crc kubenswrapper[4678]: I1124 13:18:59.981096 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkknb"] Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.008085 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2stbm\" (UniqueName: \"kubernetes.io/projected/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-kube-api-access-2stbm\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.008234 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-catalog-content\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.008363 4678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-utilities\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.111763 
4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-catalog-content\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.112039 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-utilities\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.112237 4678 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2stbm\" (UniqueName: \"kubernetes.io/projected/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-kube-api-access-2stbm\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.112409 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-catalog-content\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.112986 4678 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-utilities\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.135218 4678 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2stbm\" (UniqueName: \"kubernetes.io/projected/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-kube-api-access-2stbm\") pod \"redhat-marketplace-fkknb\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:00 crc kubenswrapper[4678]: I1124 13:19:00.297393 4678 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:01 crc kubenswrapper[4678]: I1124 13:19:01.009202 4678 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkknb"] Nov 24 13:19:01 crc kubenswrapper[4678]: I1124 13:19:01.041111 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkknb" event={"ID":"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b","Type":"ContainerStarted","Data":"043c628d40d50dadf6a13eb643588a1a6676d2630937f4f24bc386b86305c25a"} Nov 24 13:19:02 crc kubenswrapper[4678]: I1124 13:19:02.063217 4678 generic.go:334] "Generic (PLEG): container finished" podID="9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" containerID="79382d91d100246ffac44f4ae22cd8405c3b5e5f36c731b2f91c8ca4df2d87c6" exitCode=0 Nov 24 13:19:02 crc kubenswrapper[4678]: I1124 13:19:02.064329 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkknb" event={"ID":"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b","Type":"ContainerDied","Data":"79382d91d100246ffac44f4ae22cd8405c3b5e5f36c731b2f91c8ca4df2d87c6"} Nov 24 13:19:02 crc kubenswrapper[4678]: I1124 13:19:02.081549 4678 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:19:04 crc kubenswrapper[4678]: I1124 13:19:04.103580 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkknb" 
event={"ID":"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b","Type":"ContainerStarted","Data":"3590feacb6bd32093eacc8f1686fba3e4182295371ee7a4a3512c7b9528a2ad7"} Nov 24 13:19:05 crc kubenswrapper[4678]: I1124 13:19:05.120367 4678 generic.go:334] "Generic (PLEG): container finished" podID="9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" containerID="3590feacb6bd32093eacc8f1686fba3e4182295371ee7a4a3512c7b9528a2ad7" exitCode=0 Nov 24 13:19:05 crc kubenswrapper[4678]: I1124 13:19:05.120507 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkknb" event={"ID":"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b","Type":"ContainerDied","Data":"3590feacb6bd32093eacc8f1686fba3e4182295371ee7a4a3512c7b9528a2ad7"} Nov 24 13:19:06 crc kubenswrapper[4678]: I1124 13:19:06.140305 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkknb" event={"ID":"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b","Type":"ContainerStarted","Data":"b7a9ed91d1cf93673633f1a3f02a5d9000ae256862dd0bb63c6dbbb5beb52bfd"} Nov 24 13:19:06 crc kubenswrapper[4678]: I1124 13:19:06.165323 4678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fkknb" podStartSLOduration=3.654112597 podStartE2EDuration="7.165294647s" podCreationTimestamp="2025-11-24 13:18:59 +0000 UTC" firstStartedPulling="2025-11-24 13:19:02.076141905 +0000 UTC m=+7353.007201544" lastFinishedPulling="2025-11-24 13:19:05.587323955 +0000 UTC m=+7356.518383594" observedRunningTime="2025-11-24 13:19:06.160285293 +0000 UTC m=+7357.091344942" watchObservedRunningTime="2025-11-24 13:19:06.165294647 +0000 UTC m=+7357.096354286" Nov 24 13:19:10 crc kubenswrapper[4678]: I1124 13:19:10.307605 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:10 crc kubenswrapper[4678]: I1124 13:19:10.308580 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:11 crc kubenswrapper[4678]: I1124 13:19:11.361401 4678 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fkknb" podUID="9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" containerName="registry-server" probeResult="failure" output=< Nov 24 13:19:11 crc kubenswrapper[4678]: timeout: failed to connect service ":50051" within 1s Nov 24 13:19:11 crc kubenswrapper[4678]: > Nov 24 13:19:20 crc kubenswrapper[4678]: I1124 13:19:20.426331 4678 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:20 crc kubenswrapper[4678]: I1124 13:19:20.562599 4678 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:20 crc kubenswrapper[4678]: I1124 13:19:20.738818 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkknb"] Nov 24 13:19:22 crc kubenswrapper[4678]: I1124 13:19:22.388113 4678 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fkknb" podUID="9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" containerName="registry-server" containerID="cri-o://b7a9ed91d1cf93673633f1a3f02a5d9000ae256862dd0bb63c6dbbb5beb52bfd" gracePeriod=2 Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.416893 4678 generic.go:334] "Generic (PLEG): container finished" podID="9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" containerID="b7a9ed91d1cf93673633f1a3f02a5d9000ae256862dd0bb63c6dbbb5beb52bfd" exitCode=0 Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.417013 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkknb" event={"ID":"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b","Type":"ContainerDied","Data":"b7a9ed91d1cf93673633f1a3f02a5d9000ae256862dd0bb63c6dbbb5beb52bfd"} Nov 24 13:19:24 crc 
kubenswrapper[4678]: I1124 13:19:24.587405 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.664960 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-catalog-content\") pod \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.665427 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2stbm\" (UniqueName: \"kubernetes.io/projected/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-kube-api-access-2stbm\") pod \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.665566 4678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-utilities\") pod \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\" (UID: \"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b\") " Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.666650 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-utilities" (OuterVolumeSpecName: "utilities") pod "9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" (UID: "9a6e87ff-e0a0-43dd-b753-70988ee1bd0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.677340 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-kube-api-access-2stbm" (OuterVolumeSpecName: "kube-api-access-2stbm") pod "9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" (UID: "9a6e87ff-e0a0-43dd-b753-70988ee1bd0b"). InnerVolumeSpecName "kube-api-access-2stbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.688474 4678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" (UID: "9a6e87ff-e0a0-43dd-b753-70988ee1bd0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.774019 4678 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.774069 4678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2stbm\" (UniqueName: \"kubernetes.io/projected/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-kube-api-access-2stbm\") on node \"crc\" DevicePath \"\"" Nov 24 13:19:24 crc kubenswrapper[4678]: I1124 13:19:24.774083 4678 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:19:25 crc kubenswrapper[4678]: I1124 13:19:25.436275 4678 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkknb" 
event={"ID":"9a6e87ff-e0a0-43dd-b753-70988ee1bd0b","Type":"ContainerDied","Data":"043c628d40d50dadf6a13eb643588a1a6676d2630937f4f24bc386b86305c25a"} Nov 24 13:19:25 crc kubenswrapper[4678]: I1124 13:19:25.436861 4678 scope.go:117] "RemoveContainer" containerID="b7a9ed91d1cf93673633f1a3f02a5d9000ae256862dd0bb63c6dbbb5beb52bfd" Nov 24 13:19:25 crc kubenswrapper[4678]: I1124 13:19:25.436397 4678 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkknb" Nov 24 13:19:25 crc kubenswrapper[4678]: I1124 13:19:25.493656 4678 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkknb"] Nov 24 13:19:25 crc kubenswrapper[4678]: I1124 13:19:25.495073 4678 scope.go:117] "RemoveContainer" containerID="3590feacb6bd32093eacc8f1686fba3e4182295371ee7a4a3512c7b9528a2ad7" Nov 24 13:19:25 crc kubenswrapper[4678]: I1124 13:19:25.512938 4678 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkknb"] Nov 24 13:19:25 crc kubenswrapper[4678]: I1124 13:19:25.550072 4678 scope.go:117] "RemoveContainer" containerID="79382d91d100246ffac44f4ae22cd8405c3b5e5f36c731b2f91c8ca4df2d87c6" Nov 24 13:19:25 crc kubenswrapper[4678]: I1124 13:19:25.914387 4678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6e87ff-e0a0-43dd-b753-70988ee1bd0b" path="/var/lib/kubelet/pods/9a6e87ff-e0a0-43dd-b753-70988ee1bd0b/volumes" Nov 24 13:20:30 crc kubenswrapper[4678]: I1124 13:20:30.297382 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:20:30 crc kubenswrapper[4678]: I1124 13:20:30.298160 4678 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:21:00 crc kubenswrapper[4678]: I1124 13:21:00.296914 4678 patch_prober.go:28] interesting pod/machine-config-daemon-hhrs6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:21:00 crc kubenswrapper[4678]: I1124 13:21:00.297956 4678 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhrs6" podUID="0d7ceb4b-c0fc-4888-b251-a87db4a2665e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"